Final post

I had begun to use the Kinect contributed libraries and had gotten a rough understanding of some of the differences between three of them. For my final I planned on using the depth and motion tracking capabilities of the Kinect to create an interactive display: an "in-hands" 3D object generator driven by scripted signals. Originally, my plan was to use a 3D terrain like the one discussed in class; when a person moves their hands into a certain threshold range, and apart a certain distance, the object would display. The idea is that the person in front of the camera would move the terrain with hand motions while the person at the computer would switch which object is generated for the user to interact with.

I was able to execute the first part, displaying the object; however, I didn't have enough time to consider different terrains or objects to display. I believe my biggest challenge had to be the hardware, as most of the library is pretty easy to understand and there isn't really that much you have to alter in order to display something using the Kinect. It's just that, due to subtle differences in the libraries themselves, swapping one Kinect for another (as I soon found out) can either have a detrimental impact on how the code executes or mean the Kinect isn't recognized at all, effectively making all your hard work go to waste.
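The core of that threshold-and-display idea looks roughly like this. This is a minimal sketch, assuming Daniel Shiffman's Open Kinect for Processing library and a Kinect 1414; the depth band and the pixel-count cutoff are placeholder numbers, and the box is just a stand-in for whatever object gets generated:

```
import org.openkinect.processing.*;

Kinect kinect;

// Depth band for "hands in range". These are placeholder values;
// raw Kinect v1 depth readings run roughly 0-2047.
int minDepth = 300;
int maxDepth = 700;

void setup() {
  size(640, 480, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();

  // Average the position of every pixel inside the depth band.
  float sumX = 0, sumY = 0;
  int count = 0;
  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      int d = depth[x + y * kinect.width];
      if (d > minDepth && d < maxDepth) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // Only "generate" the object once enough pixels are in range.
  if (count > 500) {
    pushMatrix();
    translate(sumX / count, sumY / count, 0);
    rotateY(frameCount * 0.02);
    stroke(0, 255, 0);
    noFill();
    box(100);  // stand-in for the terrain or other generated object
    popMatrix();
  }
}
```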

Using the Kinect v2 library, I was able to get the concept for my other sketch done in time; however, I was unable to get a documented video of the code I wrote for the original Kinect (Kinect 1414). I would like to move forward with this sketch by looking more into the Kinect v2 library and adding the OpenCV library in order to change the display space as well. I think it also might be cool as the groundwork for an interactive display showcasing other people's art and code.
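For reference, this is roughly the minimal setup the v2 library wants (I'm assuming Thomas Sanchez Lengeling's KinectPV2 here). The stream has to be enabled before init() and the depth frame is a different size, which is exactly the kind of subtle between-library difference that tripped me up:

```
import KinectPV2.*;

KinectPV2 kinect;

void setup() {
  // The v2 depth frame is 512x424, not the v1's 640x480.
  size(512, 424);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);  // v2 streams are enabled before init()
  kinect.init();
}

void draw() {
  background(0);
  image(kinect.getDepthImage(), 0, 0);
}
```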

Here are my sketches on OpenProcessing:

- In-class display (Kinect v2):

https://www.openprocessing.org/sketch/547264

- Actual final (non-functioning without a Kinect 1414):

https://www.openprocessing.org/sketch/547258


midterm reflection - late (read disclaimer)

So, starting off by reading chapters 10 and 11, I can immediately point out that my coding flies in the face of the teachings of these chapters. I have very little in the way of organization, and my code is more of an amalgamation than an organized set of well-thought-out functions. I think that part of this lies in the fact that I originally didn't know how to do what I wanted and ended up with a lot of errors in my code, and in trying to brute-force my way past these problems I ended up coding very inefficiently. I recognized that I needed help and took action to receive it; however, had I read these chapters beforehand, I might have been able to understand and work through some of the problems that came up rather than just give up and seek tutoring.
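For my own future reference, the reorganization those chapters push for is not complicated; even my matrix sketch's draw() could be split into named steps instead of one long block (the function names here are just illustrative):

```
void setup() {
  size(600, 400);
}

void draw() {
  background(0);
  updateParticles();  // move everything first...
  drawRain();         // ...then give each visual layer its own function
  drawWarpEffect();
}

// Each helper owns one job, so problems are easier to isolate.
void updateParticles() { /* movement logic would go here */ }
void drawRain()        { /* falling-letter drawing would go here */ }
void drawWarpEffect()  { /* keyPressed warp visuals would go here */ }
```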


DISCLAIMER: THIS IS LATE DUE TO UPLOADING IT ON THE WRONG WORDPRESS BLOG!!!

Final Update

Building off of the code examples, I have made a display that shows exactly when something enters the threshold of the Kinect's camera and draws a dot that follows the group of pixels, to show where the object will be displayed depending on how the user interacts with the camera. At this point I have all the basic parts of the final idea. I wanted to get an understanding of how to display something on camera as its own scene. I will ask for assistance with the Kinect, if any are available, and for advice on any directions I should pivot into, during this last week of coding in class on Friday.
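The tracking itself is close to the average point tracking example from class: average the position of every pixel nearer than a threshold, and draw the dot there. A minimal version, assuming the Open Kinect for Processing library (the threshold value is a placeholder):

```
import org.openkinect.processing.*;

Kinect kinect;
int threshold = 700;  // placeholder: anything nearer than this counts

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);

  // Average the x/y of every pixel nearer than the threshold.
  float sumX = 0, sumY = 0;
  int count = 0;
  int[] depth = kinect.getRawDepth();
  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      if (depth[x + y * kinect.width] < threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // The dot follows the tracked group of pixels.
  if (count > 0) {
    fill(255, 0, 0);
    noStroke();
    ellipse(sumX / count, sumY / count, 30, 30);
  }
}
```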

Final Project-part1

I have begun to use the Kinect contributed libraries and have gotten a rough understanding of some of the differences between three of them. For my final I plan on using the depth and motion tracking capabilities of the Kinect version 2 to create an interactive display: an "in-hands" 3D object generator driven by scripted signals. I plan to use a 3D terrain like the one discussed in class, displayed when a person moves their hands into a certain threshold range and apart a certain distance. The idea is that the person in front of the camera would move the terrain with hand motions while the person at the computer would switch which object is generated for the user to interact with.
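For the terrain part, a minimal stand-in for the in-class version is a Perlin-noise mesh; in this sketch mouseY fills in for the hand distance the Kinect would eventually supply, and all the numbers are placeholders:

```
int cols, rows;
int scl = 20;            // grid cell size
int w = 800, h = 600;    // terrain footprint
float[][] terrain;
float flying = 0;

void setup() {
  size(600, 600, P3D);
  cols = w / scl;
  rows = h / scl;
  terrain = new float[cols][rows];
}

void draw() {
  // mouseY stands in for hand distance: it scales the hill height.
  float amp = map(mouseY, 0, height, 20, 150);

  flying -= 0.05;  // scroll the noise field so the terrain flows
  float yoff = flying;
  for (int y = 0; y < rows; y++) {
    float xoff = 0;
    for (int x = 0; x < cols; x++) {
      terrain[x][y] = map(noise(xoff, yoff), 0, 1, -amp, amp);
      xoff += 0.15;
    }
    yoff += 0.15;
  }

  background(0);
  stroke(0, 255, 0);
  noFill();
  translate(width/2, height/2);
  rotateX(PI/3);
  translate(-w/2, -h/2);
  for (int y = 0; y < rows - 1; y++) {
    beginShape(TRIANGLE_STRIP);
    for (int x = 0; x < cols; x++) {
      vertex(x*scl, y*scl, terrain[x][y]);
      vertex(x*scl, (y+1)*scl, terrain[x][y+1]);
    }
    endShape();
  }
}
```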

Final project

While I do not know exactly what I want to do for my final project, I am leaning in the direction of something to do with AR. I was thinking of using the p5.dimensions library that I found last week to slightly tweak the 3D space my camera sees and make a sort of existential overlay. This would involve the use of the DOM library as well as the video library for some aspects.

Snapchat sort of does this with their filters, and I want to give it a try.


library

The library that I wanted to look further into was the dimensions library. This library seems to take the vector workings of p5.js functions and rework them to operate in dimensions that they normally would not. I am thinking of using this library to make a pretty wacky video-playback world that the user could interactively move through, if possible.

link to the library: https://github.com/Smilebags/p5.dimensions.js

Research project-TransHuman Collective

One of the coolest ideas to come into the media space in the 21st century is the use of augmented and virtual reality to add depth to the act of storytelling through media. Much of video game culture has revolved around getting closer and closer to actually feeling like the game is real. Even some movies are starting to be shown in virtual reality to blow spectators away with the surreal feeling of being right in the middle of the story. Augmented reality is slightly different in that it takes spaces and objects in real life as the foundation of the code and superimposes a desired overlay on top. This involves the use of special glasses and/or a camera lens as a medium for the code and the subsequent art design to be displayed.

Prototype AR glasses (Photo: TIRIAS Research)

Now enter TransHuman Collective, a programming and design group headquartered in India that makes augmented reality and virtual reality pieces. THC is the brainchild of Soham Sarcar and Snehali Shah, who both hold Bachelors in Visual Arts from Maharaja Sayaji Rao University – Faculty of Fine Arts, Baroda. Combined, they have handled more than 400 brands across industries over the last 15 years.


Top: a conference presentation using augmented reality to involve the crowd

Bottom: Mumbai interactive installation

The majority of their work consists of creative pieces that are used to help promote or raise awareness of a brand or idea. They have worked with major brands like MTV to bring custom interactive media experiences to their client base.

I will be sure to have more examples to show when I present, but for now here is THC's website, housing most of their published works:

http://transhuman.in/


Final Matrix

For this I wanted to recreate the effect of a scene from the Matrix movie. The goal was to simulate an event horizon as you enter the Matrix. To do so, I needed an ArrayList I could get, set, and add new values to, replicating particles moving in certain directions to show a navigation effect on keyPressed. The most trouble I had was with the map() functions for the mouse movements and the for loops for the navigation. I spent a lot of time in the tutoring center trying to understand the coding I was attempting, and in the end I brute-forced my way to making the code work with the help of a tutor.

https://www.openprocessing.org/sketch/516427
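A stripped-down version of the particle logic is below: the ArrayList, the map() calls for the mouse, and the keyPressed toggle for the warp. The values here are placeholders, not the exact code from the sketch linked above:

```
ArrayList<Particle> particles = new ArrayList<Particle>();
boolean warping = false;

void setup() {
  size(600, 600);
  for (int i = 0; i < 200; i++) {
    particles.add(new Particle());
  }
}

void draw() {
  background(0);
  // map() turns the mouse position into a steering direction.
  float dirX = map(mouseX, 0, width, -2, 2);
  float dirY = map(mouseY, 0, height, -2, 2);
  for (Particle p : particles) {
    p.update(dirX, dirY);
    p.show();
  }
}

void keyPressed() {
  warping = !warping;  // toggle the event-horizon effect
}

class Particle {
  float x = random(width);
  float y = random(height);
  float speed = random(1, 4);

  void update(float dirX, float dirY) {
    if (warping) {
      // Rush away from the center, like falling into the matrix.
      float d = max(dist(x, y, width/2, height/2), 1);
      x += speed * 4 * (x - width/2) / d + dirX;
      y += speed * 4 * (y - height/2) / d + dirY;
    } else {
      y += speed;  // ordinary falling motion
    }
    // Recycle particles that leave the screen.
    if (x < 0 || x > width || y < 0 || y > height) {
      if (warping) {
        x = width/2 + random(-50, 50);  // respawn near the center
        y = height/2 + random(-50, 50);
      } else {
        x = random(width);
        y = 0;  // fall from the top again
      }
    }
  }

  void show() {
    stroke(0, 255, 70);
    point(x, y);
  }
}
```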

midterm part 1

My goal is to create a unique Matrix rain simulation. I have started by simulating the effect of the Matrix rain using classes and random number and character creation inside of a loop to give the effect of letters falling down the screen. I still need to iron out the fading of the letters and make the "rain" more concrete, but I have the basic form down. I also want to add an interaction to simulate the start of entering the Matrix: it would look as if the letters have frozen, and then a zoom-in effect like warp drive from Star Wars would occur. I still do not know how to make this happen, so if anyone has an idea as to how to do this, it would be greatly appreciated.

https://www.openprocessing.org/sketch/513316
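Here is a bare-bones sketch of the approach so far: a class per falling letter, a fresh random character each frame, and a translucent background rectangle to fake the fading. All the values are placeholders rather than the sketch's exact numbers:

```
ArrayList<Drop> drops = new ArrayList<Drop>();

void setup() {
  size(600, 600);
  background(0);
  textSize(16);
  for (int i = 0; i < 100; i++) {
    drops.add(new Drop());
  }
}

void draw() {
  // A translucent rectangle instead of a full clear leaves fading trails.
  noStroke();
  fill(0, 40);
  rect(0, 0, width, height);

  fill(0, 255, 70);
  for (Drop d : drops) {
    d.fall();
    d.show();
  }
}

class Drop {
  float x = random(width);
  float y = random(-height, 0);  // start above the screen
  float speed = random(2, 8);

  void fall() {
    y += speed;
    if (y > height) {
      y = random(-100, 0);       // recycle to the top
      x = random(width);
      speed = random(2, 8);
    }
  }

  void show() {
    // A new random character each frame gives the flicker.
    char c = char(int(random(33, 127)));
    text(c, x, y);
  }
}
```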

animation

https://www.openprocessing.org/sketch/509951

The idea was to emulate the infinite road illusion with code. First I tried making an array of lines with varying stroke weight to get a reference for how the code would look without animation in place. I then tried to replicate the image using a for loop and the line() function, which for the life of me I could not get to work due to its tendency to just fill the space. I also wanted to vary the horizon details a bit, so I used a bezier() function instead to create a bit more random behavior. The hardest part from there was aligning the overlapping curves to make straight lines, so it became a trial-and-error game of inputting numbers, but in the end I got a result similar to what I wanted.
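A compact sketch of that drawing logic is below, with the for loop for the converging lines and a bezier() for the wavier horizon; the specific numbers here are illustrative, not the ones from the final sketch:

```
void setup() {
  size(600, 400);
}

void draw() {
  background(255);
  float horizon = height * 0.4;
  float vanishX = width / 2;

  // A slightly wavy horizon drawn with a bezier instead of a straight line.
  noFill();
  stroke(0);
  strokeWeight(1);
  bezier(0, horizon, width * 0.3, horizon - 15,
         width * 0.7, horizon + 15, width, horizon);

  // Road edges and lane lines all converge on the vanishing point.
  for (int i = -3; i <= 3; i++) {
    float bottomX = vanishX + i * width / 6.0;
    strokeWeight(map(abs(i), 0, 3, 1, 4));  // thicker toward the edges
    line(vanishX, horizon, bottomX, height);
  }
}
```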