Final Project

For my final sketch, I wanted to create a music visualizer that the audience could also add sounds to. As the project progressed, I got carried away making the visuals, and due to a lack of time I opted out of preloading sounds that the audience could trigger by pressing keys on the keyboard. However, that is something I would like to explore in the future, especially in a way that would let the audience add something that doesn’t sound “bad” against the music.

My final sketch consists of a title screen with “stars” dotting the window and a button that starts the song. When the song starts, the sketch turns on the computer’s camera, and the subsequent visualizations consist of video manipulation or elements added to the screen in time with the song. For this project I used p5.js’s DOM and sound libraries. The visualizations include changing the video’s color, adding lyrics, a “static” effect, a “glitch” effect, a “lag” effect, a zoom effect, a pixelation effect, and a combined pixel/grayscale/movement effect.

Unfortunately, I was ultimately unable to connect all the pieces of my project into one sketch; I had trouble integrating the video manipulations, but I was able to successfully include the added elements like color, static, and lyrics. While I’m disappointed I wasn’t able to fully realize my project, I’m glad I was able to create each video effect individually and independently of the others. I really enjoyed making these sketches and got to explore video manipulation in depth and in creative ways. In the future, I’d definitely like to connect all of the sketches into one piece, continue creating sketches alongside music, and further explore the different visualizations I can make!
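For reference, here is a minimal sketch of how those pieces fit together, assuming a hypothetical song.mp3 asset; the only effect shown is the volume-driven tint, the simplest of the list above, so this is a skeleton rather than the finished piece.

```javascript
// Skeleton of the structure: starfield title screen with a start button;
// pressing it starts the song, turns on the camera, and tints the video
// with the song's loudness. "song.mp3" is a placeholder file name.
let song, video, amp;
let stars = [];
let started = false;

function preload() {
  song = loadSound('song.mp3'); // placeholder asset
}

function setup() {
  createCanvas(640, 480);
  for (let i = 0; i < 100; i++) {
    stars.push(createVector(random(width), random(height)));
  }
  const button = createButton('start'); // p5.js DOM library
  button.position(10, 10);
  button.mousePressed(() => {
    song.loop();
    video = createCapture(VIDEO); // turn on the computer's camera
    video.size(width, height);
    video.hide();                 // draw it to the canvas instead
    amp = new p5.Amplitude();     // tracks the song's loudness
    started = true;
  });
}

function draw() {
  if (!started) {
    background(0);                // title screen: "stars" on black
    stroke(255);
    for (const s of stars) point(s.x, s.y);
    return;
  }
  const level = amp.getLevel();   // roughly 0..1
  const g = constrain(255 - level * 800, 0, 255);
  tint(255, g, 255);              // video shifts magenta on loud moments
  image(video, 0, 0, width, height);
}
```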

Link to final sketch: https://www.openprocessing.org/sketch/546049

Link to individual video effects: https://www.openprocessing.org/sketch/545819 (tint), https://www.openprocessing.org/sketch/545846 (lyrics), https://www.openprocessing.org/sketch/545911 (zoom), https://www.openprocessing.org/sketch/545914 (lag), https://www.openprocessing.org/sketch/543849 (pixelates), https://www.openprocessing.org/sketch/545986 (glitch), https://www.openprocessing.org/sketch/546013 (static), https://www.openprocessing.org/sketch/546018 (movement)
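As an example of how one of these effects can work, here is a rough sketch of the “static” idea: draw the camera frame, then overwrite a random fraction of its pixels with grayscale noise. This is a reconstruction of the general technique, not the code behind the linked sketch.

```javascript
// "Static" effect: copy the camera frame to the canvas, then replace
// ~10% of its pixels with random grayscale values, like TV noise.
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  pixelDensity(1); // keep pixels[] the same size as the canvas
}

function draw() {
  image(video, 0, 0, width, height);
  loadPixels();
  for (let i = 0; i < pixels.length; i += 4) {
    if (random(1) < 0.1) {
      const n = random(255);
      pixels[i] = pixels[i + 1] = pixels[i + 2] = n; // grayscale speck
    }
  }
  updatePixels();
}
```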

Final Project Update

For my final project, I have a title scene before entering the music visualizer. I’m using SceneManager to switch between these two screens, and it worked until I started coding more complex elements into my music visualizer scene. Once I write code in the music visualizer scene, SceneManager “breaks” altogether. In my music visualizer scene, I’m using the computer’s camera to record the viewer. I’ve written a few elements that alter the viewer’s appearance through the camera’s pixels, but I’m having trouble getting them to work alongside SceneManager, and potentially the song itself; I’m not sure yet, because SceneManager breaks before I can see what happens on screen. I can add or change basic elements, such as the background color shifting or objects moving in time to the music, but manipulating the camera’s footage seems to be giving me the most problems. I also have yet to add sound that the audience can interact with, but I plan to map preloaded sounds to the keyboard’s keys so the audience can press them to produce sounds.
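For context, here is a stripped-down version of the two-scene structure, assuming the p5.SceneManager library. One guess at the breakage: if createCapture() runs on every frame, or in more than one scene, each call opens a new camera stream, so in this sketch the capture is created exactly once, in the visualizer scene’s setup.

```javascript
// Two scenes managed by p5.SceneManager: a title screen and a
// camera-based visualizer. The capture is created once, when the
// visualizer scene's setup runs.
let mgr;

function setup() {
  createCanvas(640, 480);
  mgr = new SceneManager();
  mgr.addScene(TitleScene);
  mgr.addScene(VisualizerScene);
  mgr.showScene(TitleScene);
  mgr.wire(); // forward mousePressed etc. to the active scene
}

function draw() {
  mgr.draw();
}

function TitleScene() {
  this.draw = function () {
    background(0);
    fill(255);
    textAlign(CENTER, CENTER);
    text('click to start', width / 2, height / 2);
  };
  this.mousePressed = function () {
    this.sceneManager.showScene(VisualizerScene);
  };
}

function VisualizerScene() {
  let video;
  this.setup = function () {      // runs once when the scene is shown
    video = createCapture(VIDEO);
    video.size(width, height);
    video.hide();
  };
  this.draw = function () {
    image(video, 0, 0, width, height);
  };
}
```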

Final Project

For my final project, I will be using p5.js’s sound library to create a music video visualizer. The song I chose for this project is superorganism’s “Reflections on the Screen” because, lyrically, it refers to the internet and technology, which I thought aligned well with coding. The visualization, which will complement the music’s rhythm, will feature pixel manipulation similar to Reas’ work, as well as elements/graphics popping up on the screen. I will be using the camera to show the audience’s “reflections on the screen.” I also chose this song because it’s relatively bare, with minimal sampling of other sounds (at least compared to their other songs), so it leaves room for the audience to add their own sounds through the keyboard, which I plan to preload into the code. If I can, I’d like the sounds to land on the beat (even when pressed off-beat), and I’d like them to harmonize with the song so they don’t clash with it. Essentially, I want the keyboard to act as an instrument like a Midi Fighter with preloaded samples. Overall, I want the piece to have an OK Go/Adult Swim Off the Air/superorganism vibe. Audience interaction is also super important because I want the audience to be a part of the music; no one should feel like they can’t play it.
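Here is a sketch of how the keyboard instrument could work, assuming preloaded samples (the file names and the tempo are made up): each key press is scheduled onto the next beat, so playback stays on the grid even when the key is pressed off-beat. Harmonizing would then come down to choosing samples recorded in the song’s key.

```javascript
// Keyboard sampler quantized to the beat: a key press schedules its
// sample at the next beat boundary of the playing song.
let song;
let samples = {};
const BPM = 100;                  // assumed tempo of the track
const BEAT = 60 / BPM;            // seconds per beat

function preload() {
  song = loadSound('reflections.mp3');   // hypothetical asset
  samples['a'] = loadSound('chime.mp3'); // hypothetical samples
  samples['s'] = loadSound('pluck.mp3');
}

function setup() {
  createCanvas(640, 480);
}

function mousePressed() {
  if (!song.isPlaying()) song.loop();
}

function keyPressed() {
  const sample = samples[key];
  if (!sample || !song.isPlaying()) return;
  // time remaining until the next beat boundary in the song
  const wait = BEAT - (song.currentTime() % BEAT);
  sample.play(wait);              // schedule the sample on the beat
}
```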

Final Project Inspo

I’m not exactly sure what I want to do for my final project, but I would love to work with music. I really liked the presentations on Rob Clouth and Casey Reas, and I think it would be really cool to do some kind of music video/coding combination, where the code interacts with the music and creates some kind of visualization on the screen. I’m not sure what form the visualization will take; it could be similar to Clouth, who makes shapes and designs on screen, or similar to Reas, who alters the video footage itself, or even both. I also want some kind of interactivity on the audience’s part instead of them just watching the screen. I think this interactivity will add another dimension to the piece, as it’s otherwise very easy to walk past or click away from a video with visualizations. Maybe the audience will make the music?

This sketch inspired me regarding interactivity:

https://www.openprocessing.org/sketch/418569

Reas’ music video inspired me regarding video manipulation:

Library

Ever since my midterm project, I’ve wanted to incorporate sound into my sketches. Especially after seeing so many presentations in class demonstrating the intersection of music and digital design, I’d really like to explore this area as well. After scrolling through the p5.js website, I’m thinking about using waveforms to supplement music through p5.FFT: https://p5js.org/reference/#/p5.FFT. Or, similar to a previous week’s research post on BlokDust, I’d like to possibly make an interactive soundscape through p5.Part (https://p5js.org/reference/#/p5.Part) or p5.Phrase (https://p5js.org/reference/#/p5.Phrase).
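As a starting point, here is a minimal p5.FFT example that draws the live waveform of a playing sound (“track.mp3” is a placeholder):

```javascript
// Draw the live waveform of a sound using p5.FFT from the sound library.
let sound, fft;

function preload() {
  sound = loadSound('track.mp3'); // placeholder asset
}

function setup() {
  createCanvas(640, 240);
  fft = new p5.FFT();
}

function mousePressed() {
  if (!sound.isPlaying()) sound.loop();
}

function draw() {
  background(0);
  const wave = fft.waveform();    // samples in the range -1..1
  noFill();
  stroke(255);
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    const x = map(i, 0, wave.length, 0, width);
    const y = map(wave[i], -1, 1, height, 0);
    vertex(x, y);
  }
  endShape();
}
```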

Research Project — Daniel Rozin

Daniel Rozin is an Israeli-American artist based in New York. He studied industrial design at the Bezalel Academy of Arts and Design in Jerusalem before entering the Interactive Telecommunications Program at NYU ten years later. In this program, Rozin learned how to be creative with technology by means of programming and electronics. Through technology, Rozin found his creative footing as an artist, and he now works in the field of interactive digital art.

Rozin’s work is primarily composed of installations and sculptures that respond to the presence of a viewer. He uses various mediums to create art, from pure software to electronics to static and kinetic sculpture. Oftentimes, the viewer becomes the content of the piece, as Rozin explained in an interview with Leaders in Software and Art: “The artist creates the premise and the parameters of interaction, the artist’s responsibility is to imagine almost all possible interactions and see that those would yield an acceptable result. It is important for the interactive artist to leave a big chunk of the piece open to interactivity so that the viewer can really change the piece and feel ownership over it.”

The piece above is from Rozin’s “Mechanical Mirrors” series. In this particular piece, he explores the intersection of soft materials and mechanics, but the series uses various materials to act as the mirrors. According to Co.Design, Rozin creates these mirrors by using custom-built software written in C++ that translates data from a camera into simplified pixels, which play across the face of his sculptures in near real time. Interestingly, none of the technology he uses in this series is in the viewer’s line of sight.
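Rozin’s mirrors run on custom C++ software and physical actuators, but the core idea described above (camera data reduced to a coarse grid of “pixels”) can be approximated in a few lines of p5.js. This is an illustration of the principle, not his code:

```javascript
// Software "mirror": sample the camera at low resolution and draw one
// brightness-driven tile per sample, like a coarse grid of pixels.
let video;
const STEP = 16;                  // size of each mirror "pixel"

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width / STEP, height / STEP); // low-res sampling
  video.hide();
  noStroke();
}

function draw() {
  background(0);
  video.loadPixels();
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = (y * video.width + x) * 4;
      const b = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      fill(b);                     // tile shade tracks camera brightness
      rect(x * STEP, y * STEP, STEP - 1, STEP - 1);
    }
  }
}
```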

I especially like this series, because I think it playfully accomplishes Rozin’s mission of closing the gap between technology and humans, as he said in an interview, “Nowadays we are exposed to a lot of technological wizardry and don’t think twice about it, in fact we have given up on trying to understand it…I try to make technological devices that are simple to understand and rely on our intuition rather than defy it.”

In other works, Rozin continues to explore mirror concepts, since he stated in numerous interviews that his main interest in his art is to explore the way we view the world and create images in our mind; mirrors seem to exemplify this concept.

His website

His Vimeo

Data

I think one of the hardest parts of this assignment was getting access to the APIs; either I didn’t have a key, or there was some linking problem within my sketch. For example, sometimes when I linked an API, my sketch would be stuck on “Loading…”. Because I had to cycle through a lot of different APIs, I had to come up with different sketch ideas, but I finally settled on the Google Books API. I wanted the titles of the linked books to appear on screen: their font size would be determined by how many pages they have, and their font color would be determined by their average ratings. However, I’m having problems with creating functions and linking them to the API, and so none of the elements in my draw function work.
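Here is a sketch of the idea described above, written with loadJSON() and a callback in setup() rather than in preload(); a request that never resolves inside preload() is one way a sketch gets stuck on “Loading…”. The field names (volumeInfo, pageCount, averageRating) come from the Google Books API, and the query is arbitrary.

```javascript
// Fetch books from the Google Books API and draw each title with a
// font size from its page count and a color from its average rating.
let books = [];

function setup() {
  createCanvas(640, 480);
  const url = 'https://www.googleapis.com/books/v1/volumes?q=processing';
  loadJSON(url, gotData);         // callback runs once the data arrives
}

function gotData(data) {
  books = data.items || [];
}

function draw() {
  background(255);
  let y = 40;
  for (const book of books) {
    const info = book.volumeInfo;
    // size from pages, color from rating; defaults cover missing fields
    textSize(map(info.pageCount || 100, 0, 1000, 10, 40, true));
    fill(map(info.averageRating || 0, 0, 5, 0, 255), 0, 100);
    text(info.title, 20, y);
    y += 50;
  }
}
```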

Learning Processing reflection

I think chapter 10 of the reading really aligned with our midterm projects. For me, since I really wanted to work with classes, they naturally had me break down my project in the way chapter 10 advised. For each of my elements/functions, I would make a separate sketch and write that one element/function before copying and pasting it into the final sketch. I think this is a good way to organize everything, because it’s easy to write messy code and get lost in it (especially when I have to refer to it again hours or days later). The only problem that then arises is getting everything to connect and work as one sketch. For example, when I worked on each class individually in another sketch, I automatically set the background to another color, so when I compiled all the classes into one sketch, my background wouldn’t change colors like I wanted it to, because the backgrounds set inside the classes would paint over it. From chapter 11, I think I should definitely take more breaks, since code can be really frustrating. Additionally, I think I should reach out for more help (whether from another coder or office hours), since talking through what I’m doing will help me learn the code and fix it. It might also be worthwhile to utilize the println() function more; I’ve never used it, but for more complicated sketches, it could be helpful to learn and use.
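To illustrate the background problem, here is a tiny reconstruction (the Cloud class is made up): the class’s leftover background() call would paint over whatever the main sketch set a moment earlier. It also uses p5.js’s print(), the analogue of Processing’s println().

```javascript
// If a class draws its own background inside display(), it hides the
// background the main sketch set just before calling it.
class Cloud {
  display() {
    // background(200); // <- leftover from testing in a separate sketch;
    //                     uncommenting this hides the flash below
    fill(240);
    ellipse(200, 100, 120, 60);
  }
}

let cloud;

function setup() {
  createCanvas(400, 300);
  cloud = new Cloud();
}

function draw() {
  // sketch-level background: flashes white briefly once per second
  background(frameCount % 60 < 5 ? 255 : 0);
  cloud.display();
  print(frameCount); // logs to the console every frame, for debugging
}
```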

Midterm pt. 2

At the very least, I wanted to make a rain cloud with lightning, which I think I accomplished. I remember initially setting out to make some sort of landscape so that I could become more comfortable with making classes, but I ended up using a few classes without it anyway. I also wanted to include a sound/audio element so that when the sketch was clicked, thunder would sound and lightning would appear; however, it was taking me too long to figure out, so I dropped it. Arrays were my biggest challenge with this project, so I binged a lot of Shiffman videos/tutorials. I had the hardest time linking arrays successfully between classes and the main sketch, and just overall understanding the concept of arrays. After I spent a few hours trying to figure out how to build the rain with an array, it turned out Shiffman had a great tutorial on exactly that; it helped me make my rain more detailed by giving it an illusion of depth. His tutorial series on arrays also helped with creating my other “wisp” clouds. I also went through a few different iterations of lightning before deciding to work with a line rather than a shape, as it seemed the easier/more concise option. If I were to do this project differently, I would take the time to actually add sound, since I think it could add a lot to the sketch’s concept. After looking through some of my classmates’ pieces, I might also want to make it more interactive, maybe by adding a person with an umbrella and having the rain cease to fall when it hits the umbrella.
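For anyone curious, here is a condensed version of the raindrop array in the style of Shiffman’s “Purple Rain” coding challenge: each drop gets a depth value that scales its speed and size, which creates the illusion of depth mentioned above. This is a reconstruction, not the project’s actual code.

```javascript
// Array of raindrops with a per-drop depth value (z): far drops are
// smaller and slower, near drops are thicker and faster.
class Drop {
  constructor() {
    this.reset();
    this.y = random(height);       // scatter drops on the first frame
  }
  reset() {
    this.x = random(width);
    this.y = random(-100, -10);    // respawn above the canvas
    this.z = random(0.2, 1);       // depth: 0.2 = far, 1 = near
  }
  fall() {
    this.y += 10 * this.z;         // speed scales with depth
    if (this.y > height) this.reset();
  }
  show() {
    stroke(138, 43, 226);
    strokeWeight(2 * this.z);      // thickness scales with depth
    line(this.x, this.y, this.x, this.y + 15 * this.z);
  }
}

let drops = [];

function setup() {
  createCanvas(640, 360);
  for (let i = 0; i < 200; i++) drops.push(new Drop());
}

function draw() {
  background(230);
  for (const d of drops) {
    d.fall();
    d.show();
  }
}
```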

Link to sketch: https://www.openprocessing.org/sketch/516567

Midterm 1

For my project, I would like to make a rain cloud that is perpetually raining. Ideally, when the composition is clicked, the background will flash like a lightning storm, and lightning will flash out of the clouds. The colors will be mostly gray and black, but the background will alternate quickly between white and black to mimic lightning. I might also add sounds for the thunder if we delve deeper into that during class. I haven’t decided yet whether the precipitation will just be rain, or whether I will include other forms of weather like snow, or possibly something even grander like a hurricane. I might add more elements to the landscape, like a tree, since I would like to learn how to use classes to generate and organize elements in my sketch. I would also love to get more familiar with loops, and will probably use them to construct my raindrops.
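As a starting point, here is a minimal sketch of the planned lightning flash: clicking triggers a short strobe in which the background alternates between white and black.

```javascript
// Click-triggered lightning: a click starts a short burst of frames
// where the background strobes between white and black.
let flashFrames = 0;

function setup() {
  createCanvas(400, 300);
}

function mousePressed() {
  flashFrames = 12;               // strobe for the next 12 frames
}

function draw() {
  if (flashFrames > 0) {
    background(flashFrames % 2 === 0 ? 255 : 0); // alternate each frame
    flashFrames--;
  } else {
    background(80);               // stormy gray otherwise
  }
}
```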

Link to sketch: https://www.openprocessing.org/sketch/513122