Final Project Documentation

For my final project, I created code for a five-minute live coding performance. Initially I wanted to simply make a visualization of a song that I like, but as I progressed and learned more about the gibber library, I started to make my own sound. The interactive experience that my project provides is a little different from the interactive sketches I've created previously, since the audience is just watching me interact with my code. However, I kind of fell in love with this form of performance and live coding, and I would like to explore this field further in the future.

My code is written in the order of execution rather than being organized by object or sound element. This means that code manipulating the variable 'b' might sit between the blocks for 'e' and 'f'. Also, each body of code separated by blank lines is meant to be highlighted together and executed all at once.
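To make that concrete, here is a small sketch of what one such block might look like. It is written in gibber's style from memory, with placeholder instruments and values rather than the actual code from my performance:

d = Drums('x*ox*xo-')              // one block: highlight everything up to the blank line and run it at once
b = FM('bass')
b.note.seq( [0, 2, 5, 4], 1/8 )

b.amp = .5                         // a later block can still reach back and tweak 'b'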

The main idea behind the progression of this audiovisual performance is that it starts off with a stable, rhythmical audio and visual pattern, and as the code progresses the pattern loses its stability and descends into complete chaos, until a voice calls out for help. The rhythms of the drums, plucks, and bass at the beginning (mostly) match, and putting them on a melody (D, E, Bb, A) makes the sound stick in your head. Then I introduce a sad man's voice over white noise, indicating a transition. The sounds and visuals introduced after this are less rhythmical and more dreamy and formless. On top of these dreamy synth sounds, I start to add distortion and panning to break the sound even further. The visual is pixelated and modified with multiple filters/shaders to reflect the sound. Although many components of this performance are organized and planned, several parts, such as the notes and rhythms for each instrument/sound, are randomized, so while practicing I would like one run-through better than another simply because it happened to sound better. I ran into many cases where the plucking (which has a random rhythm) did not quite match the drums, so I would panic right from the start of the code, but I would work around it by changing the rhythm or just moving on.
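To show where that unpredictability comes from, here is a rough sketch in the same style; the values are placeholders and the effect name is written from memory of gibber, so treat the details as an assumption rather than my exact code:

p = Pluck()
p.note.seq( Rndi(0, 7), [1/8, 1/16, 1/4] )   // random notes over a cycling rhythm; in the actual piece the rhythm
                                             // itself was also randomized, which is why some runs lined up with
                                             // the drums better than others

p.fx.add( Distortion() )                     // later blocks add distortion and panning to break the sound apart
p.pan = -.8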

The work process had three parts: 1) putting together the audio elements, 2) making each section of the visualization, and 3) integrating them all and deciding the order of the code for the performance. Part 1 was fun and only required me to decide on the instruments, notes, and rhythms I wanted and to put them together so that the whole thing sounded good to me. It started to get more challenging from part 2, because the visual elements had to reflect the audio I had created. One thing I learned during this process is that a good sound visualization does not always reflect the master output. For example, in the part where I introduce the robotic voice, I made the visual reflect the voice's output rather than the master output, because that makes the visualization more dynamic and look authentic to the audience. The last part was about organizing everything so that I could come up with something complete and presentable.
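A minimal sketch of that choice is below, with placeholder object names; the mapping syntax is written from memory of gibber's convention that assigning a capitalized audio property creates a continuous mapping, so the exact spelling is an assumption:

voice = Sampler()            // placeholder standing in for the robotic-voice sample
c = Cube()

// c.scale = Master.Out      // mapped to the master bus, the cube reacts to everything at once
c.scale = voice.Out          // mapped to the voice alone, it only moves when the voice speaks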

One difficulty I faced that I could not solve was a problem with the freesound.org API. I wish I had recorded the sound while the API still worked, because I really liked how the freesound samples worked with my sound. gibber was created five years ago, and after a while it started to have problems with the API. Charlie Roberts, the creator of the library, came up with a fix two years ago, but I assume freesound has changed its API policy and introduced a different authentication process since then.

Anyway, I had a lot of fun with this project, and I wish to learn about other live coding techniques so that I can build solid knowledge and skills around this type of coding.

Code: https://www.openprocessing.org/sketch/546066

Recording: https://www.youtube.com/watch?v=ubU9xgEJvzI&feature=youtu.be