Research Project – HPSCHD

(John Cage and Lejaren Hiller working on HPSCHD)

HPSCHD, by John Cage, a composer, and Lejaren Hiller, a pioneer in computer music, is one of the wildest musical compositions of the 20th century. Its first performance, in 1969 at the Assembly Hall of the University of Illinois at Urbana-Champaign, included 7 harpsichord performers, 7 pre-amplifiers, 208 computer-generated tapes, 52 projectors, 64 slide projectors with 6,400 slides, and 8 movie projectors with 40 movies, and it lasted about 5 hours.

(First performance)

Before explaining further, imagine listening to this for five hours.

When I first listened to the piece at MoMA, it felt like a devil was speaking to me, but the composition is actually one of the early examples of randomly generated computer music.

HPSCHD (the word ‘harpsichord’ contracted to fit the computer’s naming limits) was created in celebration of the centenary of the University of Illinois at Urbana-Champaign in 1967. The prerequisite of the work was to involve the computer in one way or another. Cage did not want the computer to serve simply as an automatic machine that makes his work easier; he envisioned a process of composition in which the computer becomes an indispensable part.

The composition involves up to 7 harpsichord performers and 51 magnetic tapes pre-recorded with digitally synthesized sound that manipulates the pitches and durations of pieces by Mozart, Chopin, Beethoven, and Schoenberg. The music for each harpsichord performer was generated on the Illiac II computer by two programs written in the Fortran computer language, DICEGAME and HPSCHD.

DICEGAME is a subroutine designed to compose the music for the seven harpsichords. It uses a chance procedure attributed to Mozart, called the Dice Game, which generates random music by using dice rolls to select among pre-composed musical fragments.
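The actual Fortran source isn’t reproduced anywhere I could find, but the dice-game idea itself is easy to sketch. Below is a minimal, hypothetical Python version: the table of pre-composed measures and the two-dice selection rule are illustrative stand-ins, not Cage and Hiller’s code.

```python
import random

# Hypothetical table of pre-composed measures: for each of 8 positions in a
# phrase there are 11 candidates, one per possible two-dice total (2..12).
MEASURE_TABLE = [[f"measure_{pos}_{total}" for total in range(2, 13)]
                 for pos in range(8)]

def roll_two_dice():
    """Total of two six-sided dice, 2..12."""
    return random.randint(1, 6) + random.randint(1, 6)

def compose_phrase(table):
    """Pick one pre-composed measure per position using dice rolls."""
    phrase = []
    for candidates in table:
        total = roll_two_dice()
        phrase.append(candidates[total - 2])  # index 0 corresponds to a total of 2
    return phrase

if __name__ == "__main__":
    print(compose_phrase(MEASURE_TABLE))
```

Because two dice make middle totals like 7 far more likely than 2 or 12, some fragments recur much more often than others, which is part of why dice-game music still hangs together.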

The second program, HPSCHD, is responsible for the sounds recorded on the tapes. The program synthesized sounds with harmonics similar to those of a harpsichord. It used a chance procedure from the I Ching, or Book of Changes, an ancient Chinese divination text: it divided the octave into anywhere from 5 to 56 equal parts and computed the pitches for each division against the 64 choices of the I Ching procedure. I don’t completely understand how it works, but it apparently allows 885,000 different pitches to be generated.
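I can’t reconstruct Hiller’s program, but the microtonal part of that description is easy to illustrate. The sketch below just computes the frequencies of an octave divided into n equal parts for a few values of n; the reference pitch (middle C) is my own assumption, not anything taken from HPSCHD itself.

```python
BASE_HZ = 261.63  # assumed reference pitch (middle C), for illustration only

def octave_division(n, base_hz=BASE_HZ):
    """Frequencies (Hz) of one octave divided into n equal steps."""
    return [base_hz * 2 ** (k / n) for k in range(n)]

if __name__ == "__main__":
    for n in (5, 12, 56):
        steps = octave_division(n)
        print(f"{n:2d} divisions: step ratio {steps[1] / steps[0]:.5f}, "
              f"first pitches {[round(f, 1) for f in steps[:4]]} Hz")
```

Twelve divisions gives the familiar equal-tempered scale; the other divisions produce microtonal steps that no harpsichordist could play, which is presumably why those pitches went onto tape.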

Each performance of HPSCHD is supposed to be different, due to the randomly generated sounds from the tapes and performers, the varying number of tapes and performers, and the different arrangements of all these parts. A performance can play all of the sounds at once, individually, or anywhere in between. The recorded version above is just one of an essentially infinite number of variations.

Further research:

https://www.jstor.org/stable/3051496?seq=1#page_scan_tab_contents

(An academic journal article from the University of Illinois Press; you can use your NYU ID to access it.)

https://www.wnyc.org/story/john-cage-and-lejaren-hiller-hpschd/

(Interesting podcast on HPSCHD)

Research Project- Rob Clouth

Rob Clouth is an electronic musician, sound designer, and new media artist based in Barcelona. His music mixes techno and IDM (intelligent dance music).

He uses various forms of programming to create his music. One technique is sound painting: he sculpts sounds by painting their spectra using a digitizer. I think this is cool because it’s almost like reverse sound-making; usually a sound is analyzed into a spectrum, rather than the spectrum being drawn first.
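I don’t know what tool Clouth actually paints with, but the ‘reverse’ idea, drawing a spectrum first and then turning it into sound, can be sketched with an inverse FFT. Everything below (the frame size, the painted ‘brush strokes’, the output file name) is invented for illustration.

```python
import numpy as np
import wave

SAMPLE_RATE = 44100
N = 4096  # frame size: we "paint" N // 2 + 1 frequency bins

def painted_spectrum():
    """A hand-drawn magnitude spectrum: a few Gaussian 'brush strokes' over bins."""
    mags = np.zeros(N // 2 + 1)
    bins = np.arange(len(mags))
    for center, width, height in [(40, 5, 1.0), (200, 20, 0.5), (800, 60, 0.25)]:
        mags += height * np.exp(-((bins - center) / width) ** 2)
    return mags

def spectrum_to_audio(mags, seconds=2.0):
    """Give each painted bin a random phase, inverse-FFT one frame, and tile it."""
    phases = np.random.uniform(0, 2 * np.pi, len(mags))
    frame = np.fft.irfft(mags * np.exp(1j * phases), n=N)
    frame /= np.abs(frame).max()
    reps = int(seconds * SAMPLE_RATE / N) + 1
    return np.tile(frame, reps)[: int(seconds * SAMPLE_RATE)]

if __name__ == "__main__":
    audio = (spectrum_to_audio(painted_spectrum()) * 32767).astype(np.int16)
    with wave.open("painted.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(SAMPLE_RATE)
        out.writeframes(audio.tobytes())
```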

Clouth also carries around different microphones, just in case he hears a sound he wants to use in his music. He uses deep-ear binaural mics, waterproofed contact mics (I guess for recording sounds underwater), and a coil mic that picks up electromagnetic fields of electronic devices. I think it’s cool how he carries microphones around, similar to how a photographer carries a camera and lenses.

The video above is a piece from Clouth called ‘Islands of Glass’. I personally love how the visuals interact with what is going on in his music. I’m not sure how the visuals were made for this video, but Clouth has recently created another piece called ‘Transition’, shown in the video below. In this piece, Clouth generates the audio with an algorithm he wrote that scans through his music collection in date order, takes little slices of audio from each track, and stitches them together to form one continuous mix (a rough sketch of that idea follows below). I like this piece because the visuals were inspired by the growth rings on trees; Clouth loves how trees encode their own history with these rings, as well as the history of their surroundings.
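Clouth hasn’t published that algorithm, but the slice-and-stitch behavior he describes can be approximated in a few lines. This sketch assumes a folder of audio files and the pydub library; the two-second slice length, the use of file modification time as ‘date order’, and the output name are all my guesses.

```python
import os
from pydub import AudioSegment  # pip install pydub (also needs ffmpeg)

MUSIC_DIR = "my_music"  # assumed folder of audio files
SLICE_MS = 2000         # assumed slice length: 2 seconds per track

def date_ordered_files(folder):
    """Audio files sorted by modification time, standing in for 'date order'."""
    paths = [os.path.join(folder, f) for f in os.listdir(folder)
             if f.lower().endswith((".mp3", ".wav", ".flac"))]
    return sorted(paths, key=os.path.getmtime)

def stitch(folder, slice_ms=SLICE_MS):
    """Take a short slice from the middle of each track and concatenate them."""
    mix = AudioSegment.empty()
    for path in date_ordered_files(folder):
        track = AudioSegment.from_file(path)
        start = max(0, len(track) // 2 - slice_ms // 2)  # slice from the middle
        mix += track[start:start + slice_ms]
    return mix

if __name__ == "__main__":
    stitch(MUSIC_DIR).export("transition_sketch.wav", format="wav")
```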

I love how Rob Clouth combines audio and visuals so well. All of his videos are mesmerizing, and the audio is very unique. You can check out some of his other pieces on his website:

http://www.robclouth.com/#home


Research Project — Daniel Rozin

Daniel Rozin is an Israeli-American artist based in New York. He studied industrial design at the Bezalel Academy of Arts and Design in Jerusalem before entering the Interactive Telecommunications Program at NYU ten years later. In this program, Rozin learned how to be creative with technology by means of programming and electronics. Through technology, Rozin found the creativity to be an artist, and he now works in the field of interactive digital art.

Rozin’s work is primarily composed of installations and sculptures that respond to the presence of a viewer. He uses various mediums to create art, from pure software to electronics to static and kinetic sculpture. Oftentimes, the viewer becomes the content of the piece, as Rozin explained in an interview with Leaders in Software and Art: “The artist creates the premise and the parameters of interaction, the artist’s responsibility is to imagine almost all possible interactions and see that those would yield an acceptable result. It is important for the interactive artist to leave a big chunk of the piece open to interactivity so that the viewer can really change the piece and feel ownership over it.”

The piece above is from Rozin’s “Mechanical Mirrors” series. In this particular piece, he explores the intersection of soft materials and mechanics, but the series uses various materials to act as the mirrors. According to Co.Design, Rozin creates these mirrors by using custom-built software written in C++ that translates data from a camera into simplified pixels, which play across the face of his sculptures in near real time. Interestingly, none of the technology he uses in this series is in the viewer’s line of sight.
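Rozin’s C++ software isn’t public, so the following is only a rough Python sketch of the camera-to-pixels step that description implies: grab a frame, downsample it to a small grid, and map each cell’s brightness to something a motor controller could use. The grid size and the tilt-angle mapping are invented for illustration, not taken from any of his mirrors.

```python
import cv2  # pip install opencv-python

GRID_W, GRID_H = 35, 29  # assumed "mirror" resolution (~1,000 cells), not Rozin's

def frame_to_tilts(frame, grid_w=GRID_W, grid_h=GRID_H):
    """Downsample a camera frame to a small grid of brightness values and map
    each cell to a hypothetical motor tilt angle (0 = flat, 30 degrees = full)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (grid_w, grid_h), interpolation=cv2.INTER_AREA)
    return small / 255.0 * 30.0

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # default webcam stands in for the hidden camera
    ok, frame = cap.read()
    if ok:
        tilts = frame_to_tilts(frame)
        print(tilts.shape, float(tilts.min()), float(tilts.max()))
    cap.release()
```

In the real sculptures those per-cell values would be streamed to motor drivers rather than printed, but the reduction of a full camera image to a coarse grid is what turns physical materials into a mirror.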

I especially like this series, because I think it playfully accomplishes Rozin’s mission of closing the gap between technology and humans, as he said in an interview, “Nowadays we are exposed to a lot of technological wizardry and don’t think twice about it, in fact we have given up on trying to understand it…I try to make technological devices that are simple to understand and rely on our intuition rather than defy it.”

In other works, Rozin continues to explore mirror concepts; he has stated in numerous interviews that his main interest is to explore the way we view the world and create images in our minds, and mirrors seem to exemplify this concept.

His website

His Vimeo

Research Project- BlokDust

 

https://blokdust.com/

I’m always interested in projects that combine visual and audio effects and give the audience a complete experience across different senses. That is how I found BlokDust.

BlokDust is a web-based music-making app. Users can build synthesizers, put effects on their voice, remix and manipulate samples, and arrange self-playing musical environments by connecting blocks together.

BlokDust was created by Luke Twyman, Luke Philips, and Edward Silverton; it was developed in Brighton, UK and released in 2016.

The web app itself is well designed. I really like the interface; it’s pretty clean and has a clear guide to help users get started. It improves the experience of making music and gives users a better visualization of it.

Instead of just using play and stop, BlokDust creates new ways to play the music by having the blocks interact with each other.

Examples:

Playing with the keyboard:

https://blokdust.com/?c=N1V7mjxqW&t=Cello%20Sampler

self-playing:

https://blokdust.com/?c=VkF2_je5W&t=Rotational%20Sequencer

More about BlokDust:

https://guide.blokdust.com/

BlokDust uses the Web Audio API and makes use of Tone.js as an audio framework. Here is the GitHub link:

https://github.com/BlokDust/BlokDust


Research project-TransHuman Collective

One of the coolest ideas to come into the media space in the 21st century is the use of augmented and virtual reality to add depth to the act of storytelling through media. Much of video game culture has revolved around getting closer and closer to actually feeling like the game is real. Even some movies are starting to be shown in virtual reality to blow spectators away with the surreal feeling of being right in the middle of the story. Augmented reality is slightly different in that it takes spaces and objects in real life as the foundation of the code and superimposes a desired overlay on top. This involves the use of special glasses and/or a camera lens as a medium for the code and the subsequent art design to be displayed.

(Prototype AR glasses. Photo: TIRIAS Research)

Now enter TransHuman Collective, a programming and design group headquartered in India that makes augmented reality and virtual reality pieces. THC is the brainchild of Soham Sarcar and Snehali Shah, both of whom hold Bachelor’s degrees in Visual Arts from Maharaja Sayaji Rao University – Faculty of Fine Arts, Baroda. Combined, they have handled more than 400 brands across industries over the last 15 years.

 

Top: conference presentation using Augmented reality to involve the crowd

Bottom: Mumbai interactive installation

The majority of their work consists of creative pieces that are used to help promote or raise awareness of a brand or idea. They have worked with major brands like MTV to bring custom interactive media experiences to their client base.

I will be sure to have more examples to show when I present but for now here is THC’s website housing most of their published works:

http://transhuman.in/

 

Research Project: Alex Dragulescu – “Malwarez”

In his project “Malwarez”, Romanian visual artist, designer, and programmer Alex Dragulescu creates a visual encyclopedia of computer threats, including viruses, spyware, malware, and other forms of menacing code.

(“business proposition for you involving a huge sum of money” – archival inks, limited edition, available on photo paper and 100% cotton fine art paper, numbered and signed by the artist)
(“signature-mutating Trojan” – archival inks, limited edition, available on photo paper and 100% cotton fine art paper, numbered and signed by the artist)

Dragulescu tracks elements of each entity’s disassembled code (“API calls, memory addresses and subroutines”); the variables of frequency, density, and grouping are then mapped by an algorithm that generates a virtual 3D likeness for each different “species” of code. These “artificial organisms” are thus constructed from the components of the code they represent, creating a visual reflection that draws on both the artist’s interpretation and the direct source of inspiration. Additionally, Dragulescu directly cites the source of each individual code/organism, giving a date and online address for the original malicious code that was broken down and analysed to fuel the 3D visualisation.
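Dragulescu’s actual algorithm isn’t published alongside the images, but the general mapping he describes, counting features of disassembled code and letting them drive a generative form, can be sketched. The features below and the way they become shape parameters are purely illustrative.

```python
import re
from collections import Counter

def code_features(disassembly: str) -> Counter:
    """Count a few crude features of a disassembly listing:
    call instructions, memory addresses, and subroutine labels."""
    return Counter({
        "calls": len(re.findall(r"\bcall\b", disassembly)),
        "addresses": len(re.findall(r"0x[0-9a-fA-F]+", disassembly)),
        "subroutines": len(re.findall(r"^\w+:", disassembly, re.MULTILINE)),
    })

def features_to_shape(features: Counter) -> dict:
    """Map feature counts to hypothetical 3D 'organism' parameters."""
    return {
        "branches": features["calls"],                # more calls -> more limbs
        "segments": max(1, features["subroutines"]),  # subroutines -> body segments
        "density": features["addresses"] / 100.0,     # addresses -> surface density
    }

if __name__ == "__main__":
    sample = "start:\n  call 0x401000\n  mov eax, 0x2\nloop:\n  call 0x401020\n"
    print(features_to_shape(code_features(sample)))
```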

(“PWS_lineage – the keylogger that stole your Lineage password” – archival inks, limited edition, available on photo paper and 100% cotton fine art paper, numbered and signed by the artist)


I found this project particularly interesting because it looks to computing for the subject matter as well as the medium. As someone new to the world of coding and computation, I find that its mysticism stands as a barrier between those interested and those involved. However, by using the medium to communicate its own concepts in a creative way, Dragulescu demystifies it while also dispelling the idea that these concepts are in any way dry and uninteresting. The 3D models generate a wealth of questions and intrigue about what differentiates each “organism” from the others and what makes each of the resulting pieces so visually captivating, with a depth and detail that bring to life concepts that otherwise exist only in coding languages known to those familiar with computing.

Research project- telescope controller.

Since I was a child, I have watched the night sky and the constellations; it is one of my hobbies. When I was in high school, I lived in upstate New York, and the nights there were always filled with bright stars. As an enthusiastic star-gazer, this was another blessing from heaven. For my presentation, I initially attempted to connect my fervor for star-gazing with coding; yet when I dug deeper into these fields, I found a more interesting instrument: the telescope controller.

 

Arduino is simply easy-to-program hardware; you can attach a motor to it and control it with code.

 

The Arduino reads the constellation map, gathers latitude, longitude, and time information, and turns the motor accordingly to adjust the telescope so that it follows a nearby constellation.
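The core of that adjustment is the pointing math: converting a star’s catalogued position (right ascension and declination), plus your latitude, longitude, and the current time, into the altitude and azimuth the motors should drive to. A real controller would do this in C on the Arduino itself, but here is a hedged Python sketch of the same math, using a simplified and only approximate sidereal-time formula.

```python
import math
from datetime import datetime, timezone

def local_sidereal_time(utc: datetime, longitude_deg: float) -> float:
    """Approximate local sidereal time in degrees."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)  # J2000.0 epoch
    d = (utc - j2000).total_seconds() / 86400.0             # days since epoch
    ut_hours = utc.hour + utc.minute / 60 + utc.second / 3600
    return (100.46 + 0.985647 * d + longitude_deg + 15 * ut_hours) % 360

def radec_to_altaz(ra_deg, dec_deg, lat_deg, lon_deg, utc):
    """Convert right ascension/declination to altitude/azimuth in degrees."""
    ha = math.radians((local_sidereal_time(utc, lon_deg) - ra_deg) % 360)
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    alt = math.asin(math.sin(dec) * math.sin(lat) +
                    math.cos(dec) * math.cos(lat) * math.cos(ha))
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat) -
                    math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360

if __name__ == "__main__":
    # Vega (RA ~279.23 deg, Dec ~38.78 deg) as seen from upstate New York right now
    print(radec_to_altaz(279.23, 38.78, 42.65, -73.75, datetime.now(timezone.utc)))
```

Run in a loop, the difference between the telescope’s current angles and the freshly computed altitude and azimuth is what the Arduino would translate into motor steps.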

 

Top: the Arduino board, where you can attach motors and other sensors. Bottom: where you write the program that receives input and creates output.


Research Project- 1993

As I was going through the research options on the website, I was having a hard time finding something that was simple enough for me to explain while also being a pretty cool subject to talk about. I don’t really like anything that has to do with really abstract art or something that “peeks into the soul”; I just find things like that uninteresting. So after a few minutes of going through the topics, I found one that was simple and related to my favorite place on this planet, NYC.

Recalling 1993 is a project by Droga5 and the New Museum, done in 2013 to celebrate 20 years since 1993. The project allowed people to call a certain number from any of 5,000 payphones in Manhattan, and the caller would hear the voices of people from 1993. The recordings were matched to the neighborhood the caller was in, down to the very street.

When we think of interacting with the past, the mediums that come to mind are things like pictures, journals, videos, and maybe a voice recording. Here we have an example of interacting with the past people of New York City through a payphone. These people would talk about many things, from regular conversation to the crimes they were experiencing, since 1993 was the second most violent year on record since 1990. Hearing someone’s voice describe a past experience gives a more human touch, because hearing a voice is a lot more human than just reading an article or an eyewitness account.

In a video I watched, one of the recordings was a man saying his name and that he had just graduated from NYU and was looking for work. It was kind of surreal to hear someone from 25 years ago actually speak and say something that I could see myself saying.

I will be presenting a PowerPoint in class with more information and video.

https://droga5.com/work/recalling-1993/

Droga5’s ‘Recalling 1993’ Project Turns NYC Pay Phones Into Geo-Located Time Capsules

Research Project – Adam Ferriss

Adam Ferriss is a digital artist based in Los Angeles; he studied photography at the Maryland Institute College of Art, where he became interested in using code to manipulate his photos. Later he received his MFA from UCLA. He experiments with RGB tricolor separation, shader programs, and pixel-sorting algorithms. When Ferriss first started to manipulate photos, he took black-and-white photos and added red, green, and blue filters. He started incorporating code by exploring and experimenting in Processing; he studied using Daniel Shiffman’s Nature of Code and Learning Processing.

Ferriss creates these psychedelic optical illusions using Photoshop, Adobe After Effects, and algorithms that distort the pixels. The technical tools he frequently uses are openFrameworks, GLSL, and JavaScript. In an interview with Software Development Times, Ferriss explains how he alters the color and movement of the pixels: “I work a lot with noise functions, Perlin noise, or simplex noise. They’re ways to generate pseudo-randomness in color, like shaping form. It generates a seed pixel, and from that one seed pixel it looks out at its neighbors, and continuously expands so its neighbors will start expanding. It’s essentially like you’re growing an image or a crystal in the way it clumps itself together and generatively expands.”
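None of Ferriss’s own code is reproduced here, but one of the techniques he names, pixel sorting, is simple enough to sketch. This assumes Pillow and NumPy and an image file of your own; sorting each row’s bright pixels by brightness is only one of many possible rules.

```python
import numpy as np
from PIL import Image  # pip install pillow numpy

THRESHOLD = 100  # assumed brightness cutoff; pixels above it get sorted

def pixel_sort_rows(img: Image.Image, threshold=THRESHOLD) -> Image.Image:
    """Sort the bright pixels of each row by brightness, leaving dark pixels alone."""
    arr = np.array(img.convert("RGB"), dtype=np.uint8)
    luma = arr.mean(axis=2)  # rough per-pixel brightness
    for y in range(arr.shape[0]):
        mask = luma[y] > threshold
        if mask.any():
            bright = arr[y][mask]
            order = bright.mean(axis=1).argsort()
            arr[y][mask] = bright[order]
    return Image.fromarray(arr)

if __name__ == "__main__":
    pixel_sort_rows(Image.open("input.jpg")).save("sorted.jpg")
```

Changing the threshold, the sort key, or the direction of the sweep gives the streaky, glitchy variations this technique is known for, which matches his description of tweaking variables and re-running until something looks right.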

In interviews, Ferriss talks about how he takes inspiration from what is around him. He finds code that is already out there (he describes searching around GitHub) and then tinkers with it. He changes variables and runs the code over and over again until he sees something he likes.

Before earning his MFA from UCLA, Ferriss ran the photo lab at Otis College of Art and Design. He has also worked with companies such as the New York Times, Google, and Nike, and has been featured in Wired, the Creators Project, Fast Company, and many others.

I think my favorite pieces are from his collection called “500 Years Away”; the visuals are futuristic and parallel what I imagine space to look like.

Ferriss also explores interactive art; on his website he has shared pieces where the user can control what happens. For example: https://adamferriss.com/seeds/

His tumblr      His Vimeo

Research Project- Casey Reas

Casey calls himself an educator and an artist. As an educator, he is a professor at UCLA in the Department of Design Media Arts. He also co-created the Processing language with Ben Fry, which can be considered one of his biggest accomplishments.

He has a lot of accomplishments as an artist as well, having had his work shown in major museums like the Whitney Museum. Casey explores various forms of art, such as prints, installations, and software.

This is one of his recent works, called Still Life (2016). It is custom-created software that was then installed in a gallery in New York City. The images change over time, and the piece gives off a very trippy yet peaceful vibe. To see how this installation looks when it’s active, here is a link to a video captured of the actual installation in the gallery: http://reas.com/still_life_rgb_av_a/
There are three different versions in his Still Life series, one having sound and the others being silent. I believe the two silent Still Life installations were projected next to each other, while the one with sound was projected independently, and larger, for emphasis.

This is one of his prints, called RGB-056-006-080-823-715. It is similar to the installation shown earlier, using dashes of different colors. I think the name of this work is very interesting; he probably has many versions that differ slightly, with different numbers in their names. There is a very similar piece on his website with the colors in black and white.

This is one of his physical installations, shown in a gallery in New York. It is called Primitives (This Could be an Extraordinary Find). I personally enjoy this one a lot; it blinks and lights up in blue. This is the link to a video of the installation in action: http://reas.com/primitives/
Casey collaborated with Aranda\Lasch on this project, for help in creating the physical form of his works.
I find Casey’s works very fascinating, and I love how he explores different forms of art.