Real-time audio visualization is an established technology, but not one that is widely accessible. Using open-source code and technologies, my goal is to prototype a real-time audio visualizer that would be practical for a local band or DIY venue to use to improve their audiences' experience, while remaining hackable enough to customize further.
At its core would be the analytic power of the Processing language and, more importantly, the Sound library integrated into Processing 3. Once the incoming sound has been analyzed, its values can drive variables in a program that perpetually draws imagery in response to the input.
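As a minimal sketch of this idea (assuming the Processing Sound library is installed), the amplitude of the live input can be analyzed each frame and mapped directly onto a drawing variable, here the diameter of a circle:

```java
import processing.sound.*;

AudioIn input;
Amplitude loudness;

void setup() {
  size(640, 360);
  // Capture audio from the default input device (mic or line-in)
  input = new AudioIn(this, 0);
  input.start();
  // Analyzer that reports the signal's current amplitude
  loudness = new Amplitude(this);
  loudness.input(input);
}

void draw() {
  background(0);
  // Amplitude is reported in the range 0.0–1.0; map it to pixels
  float level = loudness.analyze();
  float diameter = map(level, 0, 1, 20, height);
  ellipse(width / 2, height / 2, diameter, diameter);
}
```

The same pattern generalizes: any analyzed value can feed any drawing parameter inside `draw()`, which Processing calls continuously.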
The resulting image would be projected onto a wall or ceiling of the venue from a piece of equipment comparable in size and form to standard speaker equipment.
Over the next two weeks, I will work through the available documentation for the Sound library, focusing on the AudioIn and FFT functions; on the use of object-oriented programming and particle systems in Processing as a means of applying behavior to many objects in a composition, manipulated by captured sound rather than traditional computer inputs; and finally, from a technical perspective, on the appropriate hardware to receive the audio input, run the final program, and project it.
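The pieces named above can be combined in one small sketch: an FFT splits the live input into frequency bands, and a simple particle class (the names `Particle`, `update`, and `display` are my own illustration, not library API) lets each band's energy steer one object's behavior instead of a mouse or keyboard. Assuming the Processing Sound library:

```java
import processing.sound.*;

AudioIn input;
FFT fft;
int bands = 16;
Particle[] particles = new Particle[bands];

void setup() {
  size(640, 360);
  input = new AudioIn(this, 0);
  input.start();
  // FFT splits the live signal into `bands` frequency bands
  fft = new FFT(this, bands);
  fft.input(input);
  for (int i = 0; i < bands; i++) {
    particles[i] = new Particle(random(width), random(height));
  }
}

void draw() {
  background(0);
  // One energy value per band, updated every frame
  float[] spectrum = fft.analyze();
  for (int i = 0; i < bands; i++) {
    particles[i].update(spectrum[i]);
    particles[i].display();
  }
}

// A minimal particle class: sound energy replaces traditional input
class Particle {
  float x, y, size;

  Particle(float x, float y) {
    this.x = x;
    this.y = y;
  }

  void update(float energy) {
    // Louder bands jostle their particle more and inflate it
    x += random(-1, 1) * energy * 200;
    y += random(-1, 1) * energy * 200;
    size = 4 + energy * 300;
  }

  void display() {
    noStroke();
    fill(255);
    ellipse(x, y, size, size);
  }
}
```

Scaling this up is mainly a matter of adding richer behaviors to the particle class and more expressive mappings from spectrum values to motion, color, and size.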
From a less technical perspective, I will research music theory, music's relationship to sound as a whole, and how audio and light interact as a single experience, from both an artistic and an objective standpoint.