Final Project Documentation

For my final project, I created code for a five-minute live coding performance. Initially I wanted to simply make a visualization of a song that I like, but as I progressed and learned more about the gibber library, I started to make my own sound. The interactive experience that my project provides is a little different from the interactive sketches I've created previously, since the audience just watches me interact with my code. However, I kind of fell in love with this form of performance and live coding, and I would like to explore this field further in the future.

My code is written in the order of execution rather than being organized by object or sound element. This means the code for manipulating the variable 'b' might be written between 'e' and 'f'. Also, each block of code separated by blank lines is meant to be highlighted as a whole and executed all at once.
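For example, a block like the one below gets highlighted as a whole and run in one go. (This is only an approximate gibber-style illustration; the constructor and method names are my best recollection of the web editor, not an excerpt from the actual project code.)

```javascript
// one executable block: start the drums and bring in the pluck melody together
d = EDrums('x*o*x*o-')   // drum pattern string: each character triggers a different drum sound
p = Pluck()
p.note.seq(['d4', 'e4', 'bb4', 'a4'], 1/8)   // note names written out for readability;
                                             // the real sketch may use scale degrees instead
```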

The main idea behind the progression of this audiovisual performance is that it starts off with a stable, rhythmical audio and visual pattern, and as the code progresses the pattern loses its stability and descends into complete chaos until a voice calls out for help. The rhythms of the drums, plucks, and bass at the beginning (mostly) match, and setting them to a melody (D, E, Bb, A) makes the sound stick in your head. Then I introduce a sad man's voice with white noise, indicating a transition. The sounds and visuals introduced after this point are less rhythmical and more dreamy and formless. On top of these dreamy synth sounds, I start to add distortion and panning to break the sound even further. The visuals are pixelated and modified with multiple filters/shaders to reflect the sound. Although many components of this performance are organized and planned, several parts, such as the notes and rhythms for each instrument/sound, are randomized, so while practicing I would like one performance better than another simply because it happened to sound better. I ran into many cases where the plucking (which has a random rhythm) did not quite match the drums, so I would panic from the beginning of the code, but I would just figure it out by changing the rhythm or moving on.
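The randomized part looked roughly like this. (Again an approximate sketch: treat the `.rnd()` helper, the effect name, and the pan property as my recollection of gibber rather than verified API.)

```javascript
p = Pluck()
// the same four notes, but each duration is picked at random,
// which is why some run-throughs lock in with the drums and others don't
p.note.seq(['d4', 'e4', 'bb4', 'a4'], [1/8, 1/16, 1/4].rnd())

// later in the performance: distortion and panning to break the sound apart
p.fx.add(Distortion())
p.pan = -0.5
```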

The work process had three parts: 1) putting together the audio elements, 2) making each section of the visualization, and 3) integrating everything and deciding the order of the code for the performance. Part 1 was fun and only required me to decide on the instruments, notes, and rhythms I wanted and put them together so that the whole thing sounded good to me. Things got slightly more challenging from part 2 on, because the visual elements have to reflect the audio I created. One thing I learned during this process is that a good sound visualization does not always reflect the master output. For example, in the part where I introduce the robotic voice, I made the visuals reflect the voice output rather than the master output, because that makes the visualization more dynamic and look more authentic to the audience. The last part was about organizing everything into something complete and presentable.
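I can't quote the exact gibber mapping here, but the same idea expressed in p5.sound terms (a library I also looked at) would be: analyze only the voice instead of whatever is on the master output. A minimal sketch, where 'voice.mp3' is just a placeholder file name:

```javascript
// sketch of the idea only (p5.sound, not my gibber code); 'voice.mp3' is a placeholder
let voice, amp;

function preload() {
  voice = loadSound('voice.mp3');
}

function setup() {
  createCanvas(400, 400);
  amp = new p5.Amplitude();
  amp.setInput(voice);   // follow the voice alone instead of the master output
  voice.loop();
}

function draw() {
  background(0);
  // the circle pulses with the voice even while other sounds are playing
  const level = amp.getLevel();
  fill(255);
  noStroke();
  ellipse(width / 2, height / 2, 50 + level * 300);
}
```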

One difficulty I faced that I could not solve was a problem with the freesound.org API. I wish I had recorded the sound when the API still worked, because I really liked how the freesound samples worked with my sound. gibber was created 5 years ago, and after a while it started to have problems with the API. Charlie Roberts, the creator of the library, came up with a fix 2 years ago, but I assume freesound has changed its API policy and introduced a different authentication process since then.

Anyway, I had a lot of fun with this project, and I want to learn other live coding techniques so that I can build solid knowledge and skills around this type of coding.

Code: https://www.openprocessing.org/sketch/546066

Recording: https://www.youtube.com/watch?v=ubU9xgEJvzI&feature=youtu.be

Final Project (4/27)

So far, I have almost all of the elements, both audio and visual, that I will include in my final piece. I was worried about composing a collection of sounds (I don't want to call it music) from scratch, because I don't have any experience in electronic music, audio engineering, or composing. gibber has many presets based on electronic music terminology and numbers, but I was able to understand them well enough to come up with what I envisioned: a sad, dreamy sound with psychedelic visuals. Tomorrow in class I will only be able to show half of the whole project, because my biggest task now is to integrate all the components together, and this is what I have so far in terms of putting things together. Since this code works best when it is executed line by line, I will have to decide what comes after what, and what turns off and on at certain points during the live coding performance. I still have to integrate four other audio components, including samples from the Freesound API, a major visual transition, and an ending.

Final Project PART 1

Last week in class, I said I wanted to use sound.js and gibber.js, but through further research I realized that I should focus on learning gibber and stick to its web editor. gibber is not just a library for sound composition, analysis, and visualization; it is also a live coding environment. This allows live audiovisual performance with extra transparency, because the audience can see you typing your code.

My goal is to compose some sound that includes drums, synth, audio files from freesound.org (the library includes the API), and voice. Then I will create a visual with a couple of elements, each connected to the drum output or the master output. I am going to use both 2D and 3D objects, because in gibber, when you combine 2D and 3D, the 3D object serves as a shade for the 2D object. I have created a basic outline of what my project might sound like and //commented on things that I need to work on.

Here is a screen recording of a very simple live coding session.

live_demo

I am not sure if I will be doing a live coding performance for the final presentation (I hope I can), but I am definitely planning to make a screen recording of it.

gibber web editor: https://gibber.cc/

Access my code from here: https://www.openprocessing.org/sketch/

While I was writing this, my code stopped working. I have no idea why; it worked 5 minutes ago. Now my excitement has turned into doubt.

Final Project

For the final project, I want to create a sound visualization. I have been watching many videos of sound visualizations done in Processing since earlier this semester, and now I am confident enough to give it a try. Since I am not a music person, I will stick to visualizing the frequency and amplitude of the sound, but this might change after further research. Sound visualization and analysis can be done through the p5 sound library, but gibber.js also seems like an awesome library to play around with. With gibber, I can make a drumbeat of my own and visualize it.
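As a rough sketch of what I mean by visualizing amplitude and frequency, here is a minimal p5.sound version (the file name 'track.mp3' is just a placeholder):

```javascript
let song, amp, fft;

function preload() {
  song = loadSound('track.mp3');
}

function setup() {
  createCanvas(400, 400);
  amp = new p5.Amplitude();   // overall loudness, 0..1
  fft = new p5.FFT();         // frequency spectrum
  song.loop();
}

function draw() {
  background(0);
  // amplitude drives the size of a circle in the center
  const level = amp.getLevel();
  fill(255);
  noStroke();
  ellipse(width / 2, height / 2, 50 + level * 300);
  // frequency spectrum drawn as a row of bars
  const spectrum = fft.analyze();
  stroke(255, 120);
  for (let i = 0; i < spectrum.length; i += 16) {
    const x = map(i, 0, spectrum.length, 0, width);
    line(x, height, x, height - spectrum[i]);
  }
}
```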

Here is an introduction video of gibber. As you can see, you can easily connect sound with 2D or 3D objects.

I really like the video above because it has the aesthetic I might want to try. I am not sure about the other elements floating around, but a visualization with a circular center is similar to what I want to create.

Research Project – HPSCHD

(John Cage and Lejaren Hiller working on HPSCHD)

HPSCHD by John Cage, a composer, and Lejaren Hiller, a pioneer of computer music, is one of the wildest musical compositions of the 20th century. Its first performance in 1969 at the Assembly Hall of the University of Illinois at Urbana-Champaign included 7 harpsichord performers, 7 pre-amplifiers, 208 computer-generated tapes, 52 projectors, 64 slide projectors with 6,400 slides, and 8 movie projectors with 40 movies, and lasted about 5 hours.

(First performance)

Before explaining further, imagine listening to this for five hours.

When I first listened to the piece at MoMA, it felt like a devil was speaking to me, but the composition is actually one of the early examples of randomly generated computer music.

HPSCHD (the word 'harpsichord' contracted to six characters for the computer) was created in celebration of the centenary of the University of Illinois at Urbana-Champaign in 1967. The one requirement for the work was that it involve the computer in one way or another. Cage did not want the computer to serve simply as an automatic machine that makes his work easier; he envisioned a process of composition in which the computer becomes an indispensable part.

The composition involves up to 7 harpsichord performers and 51 magnetic tapes pre-recorded with digitally synthesized sound, which manipulates the pitches and durations of sounds from pieces by Mozart, Chopin, Beethoven, and Schoenberg. The music for each harpsichord performer was generated on the ILLIAC II computer using two programs written in the Fortran computer language, DICEGAME and HPSCHD.

DICEGAME is a subroutine designed to compose the music for the seven harpsichords. It uses a random procedure attributed to Mozart, the Dice Game, which generates music by using dice rolls to select from pre-composed musical fragments.
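To make the idea concrete, here is a toy illustration of a musical dice game (my own sketch, not Hiller's Fortran code): each bar position has a small table of pre-written fragments, and the sum of two dice picks one of them.

```javascript
// every bar has a table of 11 pre-composed fragments, indexed by the sum of two dice (2..12)
const bars = [
  ['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'A10', 'A11'],
  ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B9', 'B10', 'B11'],
];

const rollDie = () => 1 + Math.floor(Math.random() * 6);

// pick one fragment per bar; every run produces a different piece
const piece = bars.map(table => table[rollDie() + rollDie() - 2]);
console.log(piece);
```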

The second program, HPSCHD, is responsible for the sounds recorded on the tapes. It synthesized sounds with harmonics similar to those of a harpsichord, using a random procedure based on the I Ching, or Book of Changes, an ancient Chinese divination text. It divided the octave into anywhere from 5 to 56 parts and calculated each division against the 64 choices of the I Ching procedure. I don't completely understand how it works, but that apparently allows 885,000 different pitches to be generated.

Each performance of HPSCHD is supposed to be different, due to the randomly generated sound from the tapes and performers, the varying number of tapes and performers, and the different arrangements of all these parts. A performance can play all of the sounds at once, individually, or anywhere in between. The recorded version above is just one of an infinite number of variations.

Further research:

https://www.jstor.org/stable/3051496?seq=1#page_scan_tab_contents

(An academic journal article from the University of Illinois Press; you can use your NYU ID to access it.)

https://www.wnyc.org/story/john-cage-and-lejaren-hiller-hpschd/

(Interesting podcast on HPSCHD)

p5.js Library

p5.gibber

I found this library interesting because before the semester ends I want to create some kind of sound visualization. Looking at the library's examples, I think it can be used not only for sound visualization sketches but also to create a simple synthesizer with p5.js. There seems to be a separate function for controlling frequency as well, which would let me create an interactive sketch that changes the sound. There is a whole online manual dedicated to this library, so I will have to spend some time figuring out how things work.

p5.js Data

My sketch is based on the Studio Ghibli API, one of the public APIs listed on the GitHub page. I also looked into a few other APIs under the games topic (Rick and Morty and Amiibo), but had difficulty loading the JSON from them. At first I thought this was because different APIs need different call functions(?); their descriptions talked about HTTP GET or REST calls and other things I couldn't quite understand. None of the APIs I was interested in loaded at first, so I found httpGet() on the p5.js reference page and used its example to successfully load the Ghibli API (the other data sets still did not work with it), although I don't fully understand the syntax.

My sketch isn’t anything cool with animated shapes, but it pulls title, description and rt_score from the data and allows the user to look through different Ghibli films with a mouse click (it would have been nice if the data included image urls as well).

Ghibli Sketch

Today, after Alex’s slack message, I tried using console.log() instead of prinln() and Rick and Morty data started to work. However, now I am faced with another problem of not being able to load the image from this data. If I preload a specific image url from the data it works, but I cannot figure out how to load the image after loading the data.

Rick and Morty sketch attempt

Chapter 10 and 11

Reading these chapters, I began to think about how I might practice planning out my code more systematically. Until the midterm sketch, I would just throw a bunch of ideas into one long main block of code and organize them as I revised. This method worked for me because I never start with a clear idea of what I want to create, probably because I am still getting used to Processing. However, I should start getting into the habit of beginning object-oriented sketches with multiple classes rather than a single tab of code, since that will help when I write more complicated programs in the future.

The debugging chapter was interesting, especially the use of println(). Separating the code into smaller sections, commenting things out, and testing in a new sketch are steps I naturally followed whenever I was faced with a problem. Printing out the location, color, array contents, and so on is something I should keep in mind when objects do not appear in the sketch. Previously, when this happened, I would just copy the code for the specific object into a new sketch and play around with it until it worked.

Midterm Part 2

During the process of creating my sketch, the most successful part was discovering a way to produce a complicated object that is interesting even by itself. Also, placing rotate functions in different parts of the code produced results I didn't foresee. There were several surprises like this throughout the process, and I learned a lot from them. Overall, my original idea of making a spaceship out of thousands of squares worked out well. I was able to add two more orbits by incorporating a separate class for the circular object, and to add a starfield background by using an array. I ran into a few problems with the rotate and scale functions when I started working with more classes, but I somehow figured out ways around them. This project taught me that things work differently in Processing and OpenProcessing. For example, in Processing some interactions with the keyPressed function work only once and mess up other key interactions, while in OpenProcessing everything works as I intended. Also, the mouseWheel function works well in Processing but won't respond at all in OpenProcessing. Those are the two problems you can find in my final sketch. Finally, I wanted to add an interaction where I could add orbits by pressing ENTER, but that never made it into my sketch, because each orbit is not a simple ellipse but a group of recursively drawn objects that is also rotating.

https://www.openprocessing.org/sketch/515679

Midterm Part 1

My midterm project will be a sketch of a square universe. While looking for inspiration, I came across a tutorial about recursion, which is calling a function from within itself to achieve something like a loop, but different. After following the tutorial I played around with it and came up with a really cool, spaceship-looking diamond object made up of millions of squares. Then I rotated the whole recursion and used scale to zoom into the object and reveal a universe of squares.
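The core of the recursion looks something like this (a p5.js-style sketch of the idea, since the midterm itself is written in Processing, and the numbers here are just placeholders): a function draws a square, then calls itself four times at a smaller scale until the squares get too small to see.

```javascript
function setup() {
  createCanvas(600, 600);
  noFill();
}

function draw() {
  background(0);
  stroke(255);
  rectMode(CENTER);
  translate(width / 2, height / 2);
  rotate(frameCount * 0.005);          // slowly spin the whole structure
  drawSquares(300);
}

function drawSquares(size) {
  if (size < 4) return;                // base case stops the recursion
  rect(0, 0, size, size);
  // recurse into the four corners at half the size
  for (const [dx, dy] of [[-1, -1], [1, -1], [-1, 1], [1, 1]]) {
    push();
    translate(dx * size / 2, dy * size / 2);
    drawSquares(size / 2);
    pop();
  }
}
```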

At this point, I’ve used variables, functions, scale, rotate, conditional, and interaction in my sketch.

This part of the project asked us to incorporate classes into the sketch, but I had difficulty executing the recursion with a class. I need to figure out how to do this for part 2, because I have two objects moving separately and I would like to organize them into classes.

I have a pretty solid vision for this project. First, the sketch starts off with a close-up of the rectangular recursion, which looks like an ancient pattern of some sort. Then, with the arrow keys, people will be able to zoom out to reveal the diamond shape and orbiting circles. However, once they zoom past a certain scale, the diamond breaks apart into a galaxy of squares. Then the sketch zooms into that galaxy.

My initial sketch has some minor problems with the interaction, but the most urgent issue for next week will be incorporating classes.

Inspiration: https://www.youtube.com/watch?v=s3Facu6ZVeA&t=4s

*When I was uploading the sketch to OpenProcessing, it worked initially, but after saving it does not show anything because of an "unexpected identifier" error. The program still works in Processing.

https://www.openprocessing.org/sketch/512849

Here is a screen recording of my sketch for now.