Final project

While I do not know exactly what I want to do for my final project, I am leaning toward something to do with AR. I was thinking of using the p5.dimensions library that I found last week to slightly tweak the 3D space my camera sees and make a sort of existential overlay. This would involve the DOM library, as well as the video library, for some aspects.

Snapchat sort of does this with their filters, and I want to give it a try.

Final Project

For the final project, I’m thinking of making an interactive toy for my cats.

The main feature: when the cats interact with the toy, a camera takes pictures and posts them to a Twitter account, which I can check when I miss my cats.

I’ll be using an Arduino as the main board, with a toy to attract the cat. A capacitive sensor and the camera’s built-in motion sensor serve as inputs to detect whether the cat is there. The camera sits on a servo motor so I can adjust its angle and position. ml5.js tests whether the picture actually has a cat in it, and a Yun shield on the Arduino connects to Temboo in order to reach the Twitter API.
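
As a first sketch of the detection step, and assuming ml5’s MobileNet image classifier, the decision logic could be a small helper like this (`looksLikeCat` and `postToTwitter` are hypothetical names of my own, not part of ml5):

```javascript
// Hypothetical helper: decide whether MobileNet results look like a cat.
// ml5's classify() callback receives an array of { label, confidence }
// objects, e.g. [{ label: "tabby, tabby cat", confidence: 0.72 }, ...].
function looksLikeCat(results, threshold = 0.5) {
  return results.some(r =>
    r.confidence >= threshold && /\bcat\b|tabby|kitten/i.test(r.label)
  );
}

// In the sketch it would be wired up roughly like this:
// const classifier = await ml5.imageClassifier('MobileNet');
// classifier.classify(img, (err, results) => {
//   if (!err && looksLikeCat(results)) postToTwitter(img); // hypothetical
// });
```

MobileNet has no single “cat” class, only breed-level labels, so matching a few cat-related words against the label string is a rough but workable filter.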

hardware:

Arduino, capacitive sensor (copper tape), camera with built-in motion sensor, servo motor, Yun shield (?), wood box

cat face recognition:

https://ml5js.github.io/docs/simple-image-classification-example.html

p5bots to communicate with the Arduino:

https://github.com/sarahgp/p5bots

Final Project Inspiration

For my final project, I’m thinking of creating an interactive sketch where the user creates their own solar system. Using a drop-down menu, someone can choose the number of planets in the sky, and a slider would control the color of a certain number of planets. I plan to use the DOM library to make it easier for the user to build up and destroy the solar system. It would also be interesting to have a button representing a black hole that destroys the solar system. I think the most challenging part will be letting the user control the number of planets and then generating them so that they interact and rotate as a solar system. The initial sketch will have a rotating sun in the center. I don’t know if I’m going to work in 3D; I will first try to get the basics of the sketch down before I attempt 3D.
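
The rotation itself is mostly trigonometry. Here is a minimal sketch of the orbit math I have in mind (`planetPosition` and the spacing/speed numbers are my own assumptions, not from any library):

```javascript
// Hypothetical orbit math for the solar-system sketch: each planet's
// position is derived from its orbital radius and the elapsed frame count.
function planetPosition(index, frame, spacing = 60, speed = 0.02) {
  const radius = spacing * (index + 1);       // planet 0 orbits closest to the sun
  const angle = (frame * speed) / (index + 1); // outer planets move more slowly
  return {
    x: radius * Math.cos(angle),
    y: radius * Math.sin(angle),
  };
}

// In draw(), relative to a sun at the canvas centre:
// const { x, y } = planetPosition(i, frameCount);
// ellipse(width / 2 + x, height / 2 + y, 20);
```

Because the number of planets is just a loop bound, regenerating the system when the drop-down changes should be straightforward.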

I really like the linked sketch. It is not interactive, but I like how the solar system is formatted so that the planets look like they are moving without the typical movement that most solar-system sketches have. This could be an approach I try rather than something like https://www.openprocessing.org/sketch/386189. Ideally, I would like to combine the two designs and then let the user interact with the sketch.

Processing Library – Box2D

Hello everyone,

I hope you are all doing well.

I am posting about a library I found on the Processing website that you might find useful if you choose to work with Processing.

The library is called Box2D, and physics is its main focus. It gives you the ability to work with statics, dynamics, vectors, velocities, and much more. I found it interesting because I know some of you may want to make video games, and physics is a really important part of making a game feel realistic. I plan to use it for my final project, and I hope some of you will too.

P.S.: The coding guru Daniel Shiffman of The Coding Train made this Processing port of the Box2D physics engine.

Link to source code:

https://github.com/shiffman/Box2D-for-Processing
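
Box2D’s real API is far richer than this, but the heart of “dynamics and velocities” is a time-stepped integration loop. Here is a hand-rolled sketch (not Box2D’s API, and in plain JavaScript rather than Processing) of one body falling under gravity:

```javascript
// Not Box2D's API -- just the kind of velocity/position update a physics
// engine performs on every body, every time step (semi-implicit Euler).
function step(body, dt, gravity = 9.8) {
  const vy = body.vy + gravity * dt; // velocity gains gravity * dt each step
  const y = body.y + vy * dt;        // the new velocity moves the body
  return { y, vy };
}

let body = { y: 0, vy: 0 };
body = step(body, 0.1); // after one step: vy ≈ 0.98, y ≈ 0.098
```

An engine like Box2D runs this kind of update for every body while also resolving collisions and constraints, which is exactly the part that is painful to write by hand for a game.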

OpenCV

At first, I really wanted to use the OpenCV library in Processing. I saw that it had a lot of features, mainly for using the camera to detect motion or even faces. The library has many features I didn’t know could just be added to a regular webcam. But I realized that OpenCV was more complex than I expected, so I hope to learn more about it and use it in next week’s assignment. For this week, I instead used the regular Video library in Processing to create what looks like artwork. By artwork I mean it looks like paint stripes on top of you as you watch yourself live. That is why I called it live art.

OpenProcessing link: https://www.openprocessing.org/sketch/532625

Processing Library

I would love to work with Loom, which links patterns to audiovisuals in some way. As a kid, I was always interested in websites that link patterns to sound. There was a website I visited where you turned squares white on a canvas, and each white square would produce a sound, played sequentially as the playhead progressed toward the right. It was so simple, but it was very easy to get lost in. I want people to associate music with geometry. Music is very mathematical, but we enjoy the math subconsciously. Our brain makes sense of these patterns and systems as effortless flows and rhythms that sound pleasing to us. It would be amazing if I could visualize sound in an interesting and attention-grabbing way.
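
That square-grid website behaves like a step sequencer. A minimal sketch of the idea (the grid layout and `notesAtStep` are hypothetical, not from Loom):

```javascript
// Hypothetical step-sequencer logic: a grid of on/off cells where each
// column plays in turn as the playhead advances left to right.
function notesAtStep(grid, step) {
  const col = step % grid[0].length; // the playhead wraps around
  return grid
    .map((row, pitch) => (row[col] ? pitch : null))
    .filter(p => p !== null);        // rows whose cell is on sound now
}

const grid = [
  [1, 0, 0, 1], // pitch 0
  [0, 1, 0, 1], // pitch 1
];
// notesAtStep(grid, 0) -> [0]
// notesAtStep(grid, 3) -> [0, 1]
```

Each returned pitch index would then be mapped to an actual tone (for instance with p5.sound oscillators), which is where the geometry-to-music association comes in.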

library

The library I wanted to look further into is the dimensions library. It takes the vector-based workings of p5.js functions and reworks them to operate in dimensions they normally would not. I am thinking of using this library to make a pretty wacky video-playback world that the user could, if possible, move through interactively.

link to the library: https://github.com/Smilebags/p5.dimensions.js
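
To get a feel for what “working in extra dimensions” means, here is a plain-JS illustration (my own code, not p5.dimensions’ API) of Euclidean distance generalized to any number of coordinates:

```javascript
// Plain-JS illustration of n-dimensional math: Euclidean distance between
// two points represented as arrays of any (matching) length.
function nDist(a, b) {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

// nDist([0, 0], [3, 4])             -> 5  (ordinary 2D distance)
// nDist([0, 0, 0, 0], [1, 1, 1, 1]) -> 2  (4D distance)
```

The library applies the same generalization to p5’s familiar vector operations, which is what would let a sketch track camera or video data through a higher-dimensional space.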

Video and Images

For my project I tried to mess around with an excerpt from one of the films I made for class during my sophomore year, which was about alien abduction. At first I thought about using pixel manipulation, but realised that since the footage was shot in black and white, there wasn’t a whole lot I could do by isolating and manipulating different colour channels. Instead I took a more collaged approach, using different filters and blending tools to create a kind of tripped-out version of the abduction. The original video can be seen to the left of the canvas.

Sketch: https://www.openprocessing.org/sketch/532249#

An issue I keep running into, however, is distinguishing the different layers from each other: push()/pop() only isolates transformations and drawing styles, so filter() still applies to the whole canvas, often grouping layers together under the same filter.

Library

The Seriously.js library seems pretty cool. I used to be a big movie guy (until college) and was always amazed at how they do the effects. I knew there was something called an editing bay, where the producers and editors would lock themselves up for months to edit hours of footage and condense it into one to three hours. They probably use some really expensive, high-tech software to edit these videos, which I don’t have any experience with. But I do have experience in coding, and this library lets you add effects to live or prerecorded video through code.