Hardware

Processing on Raspberry Pi

https://learn.adafruit.com/processing-on-the-raspberry-pi-and-pitft/processing

Running a Sketch Without the Editor

In the previous section you saw how to run Processing and use its editor to create and run a sketch. However, you might find it more useful to run a sketch directly on the Pi without using the editor. This is great for small displays like the PiTFT, where using the Processing editor isn't easy. You can also use this approach to run a sketch from a command terminal outside the graphical environment (for example, when connected to the Pi over SSH).

First, make sure the graphical desktop environment is running. Even though you aren't logging in and running commands on the desktop, it still needs to be running to show the Processing sketch. Remember, you can have the desktop start automatically on boot using the raspi-config command from the previous section (be sure to set the Pi to boot to the desktop and log in automatically).

Next, copy your Processing sketch code to the Pi. Remember, a Processing sketch includes both a .pde file with the code and the directory that contains it. For example, a Processing sketch called HelloWorld would have a directory called HelloWorld and, inside that directory, a file called HelloWorld.pde. Copy that directory and the files inside it to the Pi.

You can use a tool like FileZilla to connect to the Pi over SFTP and copy files from your computer. For example, here's a picture of uploading a sketch called HelloWorld from the ProcessingSketchbook folder on my computer to the /home/pi folder on the Pi:
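The same copy-and-run steps can also be done entirely from a terminal. This is a sketch of the workflow, assuming the Pi is reachable as raspberrypi.local, the sketch lives in ~/ProcessingSketchbook on my machine, and the processing-java command-line tool that ships with Processing is on the Pi's PATH:

```shell
# From your computer: copy the whole sketch folder to the Pi
scp -r ~/ProcessingSketchbook/HelloWorld pi@raspberrypi.local:/home/pi/

# On the Pi (or over SSH): run the sketch without opening the editor.
# DISPLAY=:0 points the sketch at the desktop session, which must be running.
DISPLAY=:0 processing-java --sketch=/home/pi/HelloWorld --run
```

The processing-java tool compiles and launches the sketch directly, which is what makes the editor unnecessary on small PiTFT-style displays.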


Takeaway

This Adafruit tutorial shows how I can automatically run a sketch on the Raspberry Pi platform. Given the Raspberry Pi's general flexibility, small size, and price point, it seems like the best hardware for creating my own system.

Rendering a Sine Wave

Sine Wave by Daniel Shiffman - Example on Processing.org

https://processing.org/examples/sinewave.html

int xspacing = 16;   // How far apart should each horizontal location be spaced
int w;              // Width of entire wave

float theta = 0.0;  // Start angle at 0
float amplitude = 75.0;  // Height of wave
float period = 500.0;  // How many pixels before the wave repeats
float dx;  // Value for incrementing X, a function of period and xspacing
float[] yvalues;  // Using an array to store height values for the wave

void setup() {
  size(640, 360);
  w = width+16;
  dx = (TWO_PI / period) * xspacing;
  yvalues = new float[w/xspacing];
}

void draw() {
  background(0);
  calcWave();
  renderWave();
}

void calcWave() {
  // Increment theta (try different values for 'angular velocity' here)
  theta += 0.02;

  // For every x value, calculate a y value with sine function
  float x = theta;
  for (int i = 0; i < yvalues.length; i++) {
    yvalues[i] = sin(x)*amplitude;
    x+=dx;
  }
}

void renderWave() {
  noStroke();
  fill(255);
  // A simple way to draw the wave with an ellipse at each location
  for (int x = 0; x < yvalues.length; x++) {
    ellipse(x*xspacing, height/2+yvalues[x], 16, 16);
  }
}

The example uses a sine calculation to generate a regular set of y values for a group of circles, one rendered at each horizontal position.

It has been thoroughly commented, leaving little need for the line-by-line breakdown I would otherwise do.
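To double-check my reading of the setup math, here is the core arithmetic pulled out into plain Java (outside Processing, so TWO_PI and sin() become Math.PI and Math.sin; the numbers assume the sketch's 640-pixel-wide canvas):

```java
public class SineWaveMath {
    public static void main(String[] args) {
        int xspacing = 16;       // pixels between circles
        int w = 640 + 16;        // canvas width plus one extra spacing
        float period = 500.0f;   // pixels per full wave cycle
        float amplitude = 75.0f;

        // dx: how far the angle advances from one circle to the next
        float dx = (float) (2 * Math.PI / period) * xspacing;

        // One y value per horizontal position, exactly as in setup()/calcWave()
        float[] yvalues = new float[w / xspacing];
        float x = 0.0f;          // theta starts at 0
        for (int i = 0; i < yvalues.length; i++) {
            yvalues[i] = (float) Math.sin(x) * amplitude;
            x += dx;
        }

        System.out.println(yvalues.length); // 41 circles across the canvas
        System.out.println(dx);
    }
}
```

So with these defaults the sketch draws 41 circles, each one about 0.2 radians further along the wave than its neighbor.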

Takeaway

I find the repetitive motion of a sine wave an interesting visual to associate with music and sound. The construction and rendering are also relatively simple to break down and modify. The imagery I imagine for my visualizer consists of three sine waves, each responding to a different section of the frequency spectrum. In addition, I think adopting a similar commenting style could make my system more accessible to users who want to modify it.
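As a rough sketch of that three-wave idea (my own extension, not part of the example): the per-wave math is just the example's calcWave() repeated with different parameters, where each amplitude would eventually be driven by a low, mid, or high frequency band. In plain Java, with hypothetical names and values:

```java
public class ThreeWaves {
    // One wave's y values: a stand-in for calcWave() with per-wave
    // amplitude and period (parameter choices here are placeholders)
    static float[] wave(int count, float theta, float amplitude,
                        float period, int xspacing) {
        float dx = (float) (2 * Math.PI / period) * xspacing;
        float[] y = new float[count];
        float x = theta;
        for (int i = 0; i < count; i++) {
            y[i] = (float) Math.sin(x) * amplitude;
            x += dx;
        }
        return y;
    }

    public static void main(String[] args) {
        int n = 41, spacing = 16;
        // Amplitudes would later come from the FFT: low, mid, high bands
        float[] low  = wave(n, 0.0f, 100.0f, 500.0f, spacing);
        float[] mid  = wave(n, 0.0f,  60.0f, 250.0f, spacing);
        float[] high = wave(n, 0.0f,  30.0f, 125.0f, spacing);
        System.out.println(low[0] + " " + mid[0] + " " + high[0]); // all 0.0 at theta = 0
    }
}
```

Each wave could then be rendered as its own row of circles, reusing renderWave() with a different vertical offset.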

Processing 3 Sound Library FFT

Processing Sound FFT Class

https://processing.org/reference/libraries/sound/FFT.html

import processing.sound.*;

FFT fft;
AudioIn in;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  size(512, 360);
  background(255);
    
  // Create an Input stream which is routed into the Amplitude analyzer
  fft = new FFT(this, bands);
  in = new AudioIn(this, 0);
  
  // start the Audio Input
  in.start();
  
  // patch the AudioIn
  fft.input(in);
}      

void draw() { 
  background(255);
  fft.analyze(spectrum);

  for (int i = 0; i < bands; i++) {
    // The result of the FFT is normalized;
    // draw the line for frequency band i, scaling it up by 5 to get more amplitude.
    line(i, height, i, height - spectrum[i]*height*5);
  }
}

Description

This is a Fast Fourier Transform (FFT) analyzer. It calculates the normalized power spectrum of an audio stream the moment it is queried with the analyze() method.

Methods

input()
Define the audio input for the analyzer.

analyze()
Queries a value from the analyzer and returns a vector the size of the pre-defined number of bands.

Constructor

FFT(theParent, fftSize)

Here the variable bands controls the FFT size, which, based on the brief explanation found at spectraplus.com, can be used to calculate the frequency resolution. Increasing the number of bands reduces the spacing between the frequencies tracked by the program.

The result of this example can be seen below. The way it represents the sound coming through the computer's selected input is relatively straightforward, but not particularly elegant.

[Screenshot, 2018-05-01: the FFT spectrum rendered as a row of vertical lines]

From here, the numbers that generate each line's height could be pulled into other rendering methods.

Takeaway

While this example was beneficial as a way of breaking down FFT, it lacks a way for me to target specific frequency ranges. Modifying the code shows me how I can create a spectrum of color that changes with each band, so I am inclined to believe that by looping over a specific slice of bands I can generate a distinct value for any range.
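A sketch of what I mean, in plain Java: average the energy over a slice of the normalized spectrum array that fft.analyze() fills in. The low/mid/high boundaries below are placeholders I chose for illustration, not values from the documentation:

```java
public class BandRange {
    // Average the energy in spectrum[lo..hi) -- one number per range
    static float bandAverage(float[] spectrum, int lo, int hi) {
        float sum = 0;
        for (int i = lo; i < hi; i++) {
            sum += spectrum[i];
        }
        return sum / (hi - lo);
    }

    public static void main(String[] args) {
        // Fake spectrum standing in for the result of fft.analyze(spectrum)
        float[] spectrum = new float[512];
        for (int i = 0; i < spectrum.length; i++) {
            spectrum[i] = (i < 32) ? 0.5f : 0.1f; // strong lows, weak rest
        }
        // Hypothetical low/mid/high splits
        float low  = bandAverage(spectrum, 0, 32);
        float mid  = bandAverage(spectrum, 32, 128);
        float high = bandAverage(spectrum, 128, 512);
        System.out.println(low + " " + mid + " " + high); // low ≈ 0.5, others ≈ 0.1
    }
}
```

Each of the three averages could then drive one of the sine waves I have in mind, one per section of the spectrum.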

Processing 3 Sound Library AudioIn Class

Sound - Processing Libraries 

https://processing.org/reference/libraries/sound/

AudioIn Class

import processing.sound.*;
AudioIn in;

void setup() {
  size(640, 360);
  background(255);
    
  // Create the Input stream
  in = new AudioIn(this, 0);
  in.play();
}      

void draw() {
}

Description

AudioIn lets you grab the audio input from your sound card.

Methods

start()
Starts the input stream.

play()
Starts the input stream and routes it to the audio hardware output.

set()
Set multiple parameters at once.

amp()
Change the amplitude/volume of the input stream.

add()
Offset the output of the input stream by a given value.

pan()
Move the sound in a stereo panorama.

stop()
Stops the input stream.

Constructor

AudioIn(theParent, in)

Takeaway

The AudioIn class seems to be the foundation for the other classes in the sound library. After declaration, in.start() seems more appropriate for analysis than the in.play() call seen in this example, since play() also routes the input back out to the audio hardware.

Object Based Programming in Processing


// Declare and construct two objects (h1, h2) from the class HLine
HLine h1 = new HLine(20, 2.0);
HLine h2 = new HLine(50, 2.5);

void setup() {
  size(200, 200);
  frameRate(30);
}

void draw() {
  background(204);
  h1.update();
  h2.update();
}

class HLine {
  float ypos, speed;

  HLine(float y, float s) {
    ypos = y;
    speed = s;
  }

  // Move the line down the canvas, wrap to the top, and draw it
  void update() {
    ypos += speed;
    if (ypos > height) {
      ypos = 0;
    }
    line(0, ypos, width, ypos);
  }
}

Description: Keyword used to indicate the declaration of a class. A class is a composite of fields (data) and methods (functions that are a part of the class) which may be instantiated as objects. The first letter of a class name is usually uppercase to separate it from other kinds of variables. A related tutorial on Object-Oriented Programming is hosted on the Oracle website.

Syntax

class ClassName {
  statements
}

Instances of the objects have to be individually defined, with each object operating as a collection of data and functions wrapped up in the class structure. Other examples have shown me that an array can be used to create, store, and manage large numbers of instances of the same object.
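A minimal sketch of that array-of-objects pattern in plain Java (the HLine class reduced to its data and update step; the canvas height is passed in as a parameter since Processing's built-in height variable isn't available outside a sketch):

```java
public class LineField {
    // Plain-Java stand-in for the HLine class: position plus speed
    static class HLine {
        float ypos, speed;

        HLine(float y, float s) {
            ypos = y;
            speed = s;
        }

        // Move down; wrap back to the top past the canvas height
        void update(float height) {
            ypos += speed;
            if (ypos > height) { ypos = 0; }
        }
    }

    public static void main(String[] args) {
        float height = 200;
        // An array creates, stores, and manages many instances at once
        HLine[] lines = new HLine[4];
        for (int i = 0; i < lines.length; i++) {
            lines[i] = new HLine(i * 50, 2.0f + i);
        }
        // One frame: update every object in the collection
        for (HLine h : lines) { h.update(height); }
        System.out.println(lines[0].ypos); // 2.0
    }
}
```

In the actual sketch the same loop would live in draw(), calling each object's update() once per frame.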

The objects rely on parameters that tie back into the statements in the class in order to function. This has been consistent in the other examples I have seen containing objects, and when I tried to write code without it, a null value error was returned.

While not present in all of the user examples I have seen, the update() method seems like an efficient way of managing interactivity and motion, allowing me to keep everything in the class.

A new object is created by calling the class's constructor like a function, parameters included; the resulting object has all of the data associated with it stored inside.

As mentioned above, I like the idea of using an update function to manage the objects over time. In this example, the function used to display the objects has been incorporated into the update function.

After trying my hand at creating an object-based composition in Processing, I may need to review the Oracle lesson, depending on how difficult it proves.
 


Takeaway

Before, I was thinking about using color and/or shape to visualize the audio, but I think the more appropriate route will be to create a set of objects with responsive rather than pre-generated behaviors, and have them either be continually created or continually updated in response to the audio input.

Plan of Research

Real-time audio visualization is a technology that is available, but not necessarily accessible en masse. Using open-source code and technologies, my goal is to prototype a real-time audio visualizer that would be practical for a local band or DIY venue to use to improve the experience of their audiences, while also being hackable enough to customize the experience further.

At its core would be the analytic power of the Processing language and, more importantly, the sound library integrated into Processing 3. Once the data from the sound has been processed, the values can be applied to variables in a program that perpetually draws imagery in response to the input.

The resulting image would be displayed through projection onto a wall or ceiling of the venue, coming from a piece of equipment comparable in size and form to standard speaker equipment.

Over the next two weeks, I will be going through the available documentation for the sound library, with a focus on the AudioIn and FFT classes; the use of object-based programming and particle systems in Processing as a means of applying behavior to multiple objects in a composition that can be manipulated by recorded sound rather than traditional computer inputs; and finally, from a technical perspective, the appropriate hardware to receive the audio input, run the final program, and project it.

From a less technical perspective, I will be conducting research into the theory behind music, its relationship to sound as a whole, and how audio and light interact as one experience, from both an artistic and an objective perspective.