Daniel McKemie: Maximalism – Volume 3
Maximalism is a series of recordings and essays that deal with the multidimensional properties of recorded sound within the context of modern compositional aesthetics. It is the author's attempt to make sense of the current state of things, to speculate upon where we may be heading next, and to consider why this aesthetic direction may be of some inherent value.
Positioning the frequency domain along the y-axis is arguably the simplest and most historically embedded way to exert determinate control. The pitch-centric foundations of Western notation lend themselves perfectly to this concept. In electronic music, a single sound source as simple as a sine tone can be swept along this axis seamlessly and with a high degree of specificity. Traditional Western music focuses on the use of predetermined gradations of the frequency spectrum, i.e. pitch, in a functional manner. Harmonic progressions such as ii–V–I and I–IV–V have worked for hundreds of years and remain effective in the structural development of tonal music. There are also alternative tuning systems informed by non-Western musics and mathematical thought.
Twelve-tone and serial music democratized the pitch field, microtonal music divided the octave beyond the Western twelve pitches, spectral music used frequency analyses of timbre to inform pitch content, and electronic music encompasses a breadth of sonic practices with no particular reverence for any single approach.
Karlheinz Stockhausen applied serial techniques to electronic music, a significant move, but an even more important exploration was his use of reverberation. Beginning with his earliest works, Studie I and Studie II, he created an incredibly rich tapestry of frequencies from the most focused timbral element, using only sine tones and space. One striking feature of these works is the use of space to expand the y-axis. The focus was not so much the frequency of the sine tone by itself on tape, but the combination of tones created when reverb was introduced. This is easily replicated with current technology: take a test tone, route it through any reverberation module, and increase the amount of reverb; one begins to hear a growth of sum and difference tones relative to the effect. In the 1950s this was done using physical spaces; it can now be done with even the simplest audio plugins or hardware processors. The details of a space, real or imaginary, can be defined to give even the simplest of sounds a much more prominent presence in the frequency domain. This can be carried into physical space through devices like spring and plate reverb units, as well as multichannel playback using a variety of approaches to speaker placement. While space is potentially quite modular in this regard, the most modular piece of this particular sonic puzzle is the listener and their position in said space.
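The test-tone experiment described above can be sketched in a few lines of code. This is a minimal stand-in, not any particular reverberation module: a single feedback comb filter substitutes for the reverb, and the tone frequency, delay time, and feedback amount are all illustrative choices (NumPy assumed).

```python
import numpy as np

def sine_tone(freq, dur, sr=44100, amp=0.5):
    """Generate a test tone: a single sine at the given frequency."""
    t = np.arange(int(dur * sr)) / sr
    return amp * np.sin(2 * np.pi * freq * t)

def comb_reverb(x, delay_s, feedback, sr=44100):
    """Feedback comb filter, the simplest building block of a reverb.
    Each pass through the delay line adds another decaying echo."""
    d = int(delay_s * sr)
    y = np.copy(x)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]
    return y

def wet_dry_mix(dry, wet, amount):
    """Crossfade between the unprocessed and reverberated signals."""
    return (1 - amount) * dry + amount * wet

# Route a test tone through the "reverberation module" and raise the mix.
dry = sine_tone(440.0, 2.0)
wet = comb_reverb(dry, delay_s=0.029, feedback=0.7)
out = wet_dry_mix(dry, wet, amount=0.5)
```

Raising `amount` toward 1.0 plays the role of "increasing the amount of reverb" in the passage above; the comb filter's echoes interfere with the tone and thicken its spectrum.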
Refraction is the deflection of sound waves, and the change in their speed, as they travel through and around mediums of varying density. The same phenomenon applies to light, radio, and other electromagnetic waves, and something like an aquarium showcases it quite clearly: the angle at which the viewer peers into the tank can drastically change the general appearance of the objects inside and distort one's perception of their location in three-dimensional space. If a sound in the tank is represented by a cube, the perceived characteristics of that sound may change based on the position of the listener.
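The bending at a boundary between two mediums follows Snell's law, which for sound is written in terms of propagation speeds. A quick sketch, assuming textbook speeds of sound in air (roughly 343 m/s) and water (roughly 1480 m/s):

```python
import math

def refraction_angle(theta_incident_deg, v1, v2):
    """Snell's law for sound: sin(t2) / sin(t1) = v2 / v1.
    Returns the refracted angle in degrees, or None when the
    incident angle exceeds the critical angle (total reflection)."""
    s = math.sin(math.radians(theta_incident_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Sound passing from air (343 m/s) into water (1480 m/s):
shallow = refraction_angle(10.0, 343.0, 1480.0)  # bends sharply away
steep = refraction_angle(20.0, 343.0, 1480.0)    # past critical angle
```

Because sound travels so much faster in water than in air, the critical angle is small; most sound striking the surface at an oblique angle reflects rather than enters, which is part of why listener position changes the picture so drastically.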
If any barriers exist between the sound source and the listener, selected frequencies may be filtered and/or amplified. If the listener moves to a different location in the space, the sound may reflect off of a different surface, allowing different areas of the spectrum to be filtered and amplified. Depending on the sonic qualities of the sound source, pure feedback for example, simply turning one's head can cause quite drastic shifts in which frequencies the listener perceives.
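The filtering a single reflection imposes can be made concrete. When a direct path and a reflected path are summed at the listener's ear, the delayed copy cancels the direct sound at odd multiples of 1/(2Δt), producing a comb of notches; a different listening position means a different Δt and a different comb. The path lengths below are purely illustrative:

```python
def notch_frequencies(direct_m, reflected_m, c=343.0, f_max=2000.0):
    """Notch frequencies of the comb filter formed when a direct and a
    reflected path are summed at the listener's ear. Cancellation occurs
    where the delayed copy arrives out of phase:
        f = (2k + 1) / (2 * dt),  k = 0, 1, 2, ..."""
    dt = (reflected_m - direct_m) / c  # extra travel time of the reflection
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * dt)
        if f > f_max:
            break
        notches.append(f)
        k += 1
    return notches

# Listener 3 m from the source, with a 4 m reflected path (dt ~ 2.9 ms):
freqs = notch_frequencies(3.0, 4.0)
```

Moving the listener even slightly changes the path-length difference, sliding every notch at once, which is why turning one's head can so audibly reshape a spectrally rich source like pure feedback.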
The y-axis as a descriptor of some sonic entity can expand beyond pitch or frequency information and into physical space. But how does one replicate or realize these spaces outside of the recording? These spatial characteristics could inform the sound material itself, be it frequency or timbre. A common approach is to process the audio with spatialization and reverb after the music has been written, effectively repositioning two-dimensional audio into a three-dimensional space. A proposed alternative is to use multiple processing units for space, rather than one room defined at the end of the chain, and to plan these spaces before constructing any audio material: writing for, or into, the space, albeit an imaginary space generated by software and/or hardware. This requires developing a space, or spaces, and although these spaces may change over time in some (in)determinate way, this space template of sorts will likely shape the creation and realization of the audio content for the piece. This is one way out of many to explore the y-axis without focusing on frequency alone.
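The multiple-units-versus-one-room idea can be sketched as a signal chain defined before any material exists. Everything here is a placeholder: each "space" is reduced to a single comb filter whose delay and feedback loosely stand in for room size and liveness, and the three rooms are hypothetical, not a prescription.

```python
import numpy as np

def simple_room(delay_s, feedback, sr=44100):
    """Return a processor for one 'space': a feedback comb filter whose
    delay and decay loosely stand in for room size and liveness."""
    def process(x):
        d = int(delay_s * sr)
        y = np.copy(x)
        for n in range(d, len(y)):
            y[n] += feedback * y[n - d]
        return y
    return process

# The space template: defined first, before any audio material is written.
spaces = [
    simple_room(0.011, 0.3),  # small, dry room
    simple_room(0.047, 0.5),  # mid-sized hall
    simple_room(0.093, 0.4),  # large, diffuse space
]

def through_spaces(x, chain):
    """Route the signal through each space in series, rather than
    applying a single room at the end of the chain."""
    for room in chain:
        x = room(x)
    return x

signal = np.random.randn(44100) * 0.1  # stand-in for composed material
out = through_spaces(signal, spaces)
```

Because the template exists before the material, the composer can audition sketches against it from the start, letting the imagined rooms shape frequency and timbre choices rather than decorating them afterward.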