
Developing Telematic Electroacoustic Music: Complex networks, machine intelligence and affective data stream sonification

Published online by Cambridge University Press:  05 March 2015

Ian Whalley
Affiliation: The University of Waikato, Private Bag 3105, Hamilton, New Zealand

Abstract

This paper proposes expanding telematic electroacoustic music practice through the consideration of affective computing and integration with complex data streams. Current telematic electroacoustic music practice, despite the distances involved, is largely embedded in older music/sonic arts paradigms. For example, it is dominated by using concert halls, by concerns about the relationship between people and machines, and by concerns about geographically distributed cultures and natural environments. A more suitable environment for telematic sonic works is found in the inter-relationship between ‘players’ and broader contemporary networked life – one embedded in multiple real-time informational data streams. These streams will increase rapidly with the expansion of the Internet of Things (IoT), and with the increasing deployment of algorithmic decision-making and machine learning software. While collated data streams, such as news feeds, are often rendered visually, they are also partly interpreted through embodied cognition, in a manner similar to the interpretation of music and sonic art. A meeting point for telematic electroacoustic music and real-time data streams is found in affective composition/performance models and data sonification. Together these allow for the sonic exploration of participants’ place in a matrix of increasingly networked relationships.
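The affective parameter-mapping sonification the abstract alludes to can be illustrated in miniature. The sketch below is not from the article; all names and mappings are hypothetical. It assumes a real-time numeric stream (e.g. collated news-feed sentiment values), normalises each value onto a pitch, and treats the rate of change between successive values as a crude ‘arousal’ parameter, loosely echoing affective models of this kind.

```python
# Minimal parameter-mapping sonification sketch (illustrative only; not the
# author's method). A data value maps to pitch; its rate of change maps to a
# hypothetical 'arousal' parameter.

def normalise(value, lo, hi):
    """Clamp and scale a raw data value into [0.0, 1.0]."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def map_to_pitch(norm, low_note=48, high_note=84):
    """Map a normalised value onto a MIDI note number (C3..C6)."""
    return round(low_note + norm * (high_note - low_note))

def sonify_stream(stream, lo, hi):
    """Yield (midi_note, arousal) pairs for successive stream values.

    Arousal is the absolute change between consecutive normalised values,
    so a volatile stream sounds more 'agitated' than a stable one.
    """
    prev = None
    for value in stream:
        norm = normalise(value, lo, hi)
        arousal = 0.0 if prev is None else abs(norm - prev)
        prev = norm
        yield map_to_pitch(norm), arousal

# Example: a toy sentiment stream in the range [-1, 1].
events = list(sonify_stream([-1.0, 0.0, 1.0], lo=-1.0, hi=1.0))
# events == [(48, 0.0), (66, 0.5), (84, 0.5)]
```

In a working telematic system the `(note, arousal)` pairs would be routed to a synthesis engine (e.g. via MIDI or OSC) rather than collected in a list; the point here is only the shape of the mapping from data to affective sonic parameters.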

Type: Articles
Copyright: © Cambridge University Press 2015

