Final Project: Cat Tweetbot

To build my project, I need to connect a capacitive sensor to the Arduino, send its readings over serial to Processing, and have Processing post to Twitter. Two libraries support my code: processing.serial and Twitter4J (http://twitter4j.org/en/index.html).

The main function of my project is to have the sensor control when a tweet is sent, so I started by setting up the Twitter API. I first tested tweeting with keyPressed.
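For reference, the keyPressed test can be sketched roughly like this in Processing (this is a sketch rather than my exact code: the OAuth keys are placeholders, and the Twitter4J jar has to be added to the sketch's libraries before it will compile):

```java
import twitter4j.*;
import twitter4j.conf.*;

Twitter twitter;

void setup() {
  // Placeholder credentials - replace with keys from your Twitter app.
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("YOUR_CONSUMER_KEY");
  cb.setOAuthConsumerSecret("YOUR_CONSUMER_SECRET");
  cb.setOAuthAccessToken("YOUR_ACCESS_TOKEN");
  cb.setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET");
  twitter = new TwitterFactory(cb.build()).getInstance();
}

void draw() {
}

void keyPressed() {
  try {
    // Post a test status every time a key is pressed.
    twitter.updateStatus("Hello from my cat tweetbot test!");
  } catch (TwitterException e) {
    println(e.getMessage());
  }
}
```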

Then I tried to tweet with a button. This is where I ran into a problem. I made the Arduino send the string "YES" to Processing every time the button is pressed. Processing can receive it and println it to the console, but it cannot tweet on this command. (I couldn't make this work.) I think the problem may be caused by the data type, since it works when I read int values from the capacitive sensor.
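One common pitfall with incoming serial strings (I can't confirm this is the exact cause, but it matches the symptoms) is that Processing's readStringUntil('\n') hands back the payload plus invisible line endings, and that Java string comparison with == checks object identity rather than content. A minimal plain-Java illustration of the fix:

```java
public class SerialStringCheck {
    // Mimics a line as Processing's port.readStringUntil('\n') returns it:
    // the payload plus a trailing '\n' (and often '\r' from Serial.println).
    static boolean isYes(String raw) {
        if (raw == null) return false;
        // trim() strips the invisible line endings; equals() compares
        // contents, unlike ==, which compares object identity.
        return raw.trim().equals("YES");
    }

    public static void main(String[] args) {
        System.out.println(isYes("YES\r\n")); // prints true
        System.out.println(isYes("NO\n"));    // prints false
    }
}
```

So instead of `if (val == "YES")`, comparing with `if (val.trim().equals("YES"))` in the Processing sketch may be all that's needed before calling the tweet function.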

After that, I wired up the capacitive sensor. I tested the capacitive sensor first and it worked perfectly. But the second time, it wouldn't show any data in the Arduino serial monitor. I tried the exact same code and wiring as before, but it didn't work. I changed the resistor, the wiring, and the breadboard. Luckily, it worked in the end.

Paint the box:

 

Next steps:

To improve my project, one of the most important steps is to set a limit/constraint on the data received from the Arduino. The values in Processing right now are a little hard to control. Next, I would upload images taken by the webcam to Twitter. I have always wanted to make a stand-alone device, and I figured out I can use an XBee to let the Arduino connect to Processing wirelessly.
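One way to constrain the readings (the numbers below are placeholders, not my measured sensor range) is to clamp each raw value into a known window and only trigger a tweet above a threshold, so a single noisy spike can't fire it:

```java
public class SensorGate {
    // Hypothetical values - tune MIN/MAX/THRESHOLD to the real sensor range.
    static final int MIN = 0, MAX = 1000, THRESHOLD = 600;

    // Clamp a raw reading into [low, high], like Processing's constrain().
    static int constrain(int value, int low, int high) {
        return Math.max(low, Math.min(high, value));
    }

    // Fire only when the constrained reading crosses the threshold.
    static boolean shouldTweet(int raw) {
        return constrain(raw, MIN, MAX) > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldTweet(4523)); // spike clamps to 1000: true
        System.out.println(shouldTweet(120));  // idle reading: false
    }
}
```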

 

code: https://github.com/yueningbai/final_cattoy

test video:
https://vimeo.com/268033313

 

 

Final Project

For the final project, I’m thinking of making an interactive toy for my cats.

The main feature: the cats interact with the toy, the camera takes a picture, and the picture gets posted to a Twitter account. I can check the account whenever I miss my cats.

I'll be using an Arduino as the main board, with a toy to attract the cats. A capacitive sensor and the camera's built-in motion sensor are the inputs that detect whether a cat is there. The camera sits on a servo motor so its angle and position can be adjusted. ml5.js tests whether the picture actually has a cat in it, and a Yún shield on the Arduino connects to Temboo in order to access the Twitter API.

 

 

hardware:

Arduino, capacitive sensor (copper tape), camera with built-in motion sensor, servo motor, Yún shield (?), wood box

 

cat face recognition:

https://ml5js.github.io/docs/simple-image-classification-example.html

p5.bot to communicate with Arduino:

https://github.com/sarahgp/p5bots

 

Libraries

https://p5js.org/reference/#/libraries/p5.sound

http://ability.nyu.edu/p5.js-speech/

I'm interested in visualizing sound. The p5.sound library lets users detect the volume and the frequency of a sound.

Research Project: BlokDust

 

https://blokdust.com/

I have always been interested in projects that combine visual and audio effects to give the audience a complete experience across different senses. That's how I found BlokDust.

BlokDust is a web-based music-making app. Users can build synthesizers, put effects on their voice, remix and manipulate samples, and arrange self-playing musical environments by connecting blocks together.

BlokDust was created by Luke Twyman, Luke Philips, and Edward Silverton. It was developed in Brighton, UK, and released in 2016.

The site itself is well designed. I really like the interface: it's clean and has a clear guide to help users get started. It genuinely improves the experience of making music and gives users a better visualization of it.

Instead of just using play and stop, BlokDust creates new ways to play music, with blocks that interact with each other.

Examples:

Playing with the keyboard:

https://blokdust.com/?c=N1V7mjxqW&t=Cello%20Sampler

self-playing:

https://blokdust.com/?c=VkF2_je5W&t=Rotational%20Sequencer

More about BlokDust:

https://guide.blokdust.com/

BlokDust uses the Web Audio API and makes use of Tone.js as its audio framework. Here is the GitHub link:

https://github.com/BlokDust/BlokDust

 

 

 

Midterm 1

For the midterm project, I decided to do a project called "Create Your Own Mondrian". The idea is based on Piet Mondrian's series "Composition with Red, Blue, and Yellow". Since the series contains only grids and color blocks, I want my project to create different images from these elements by changing their positions and colors. People will interact by clicking the mouse to move the intersection points and pressing keys to change the colors. The colors I chose are Mondrian's red, blue, yellow, black, and white.

Skills needed to realize it: setting variables, random(), keyPressed, mousePressed.

Difficulties: choosing randomly from a limited set of options (an array?).
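An array does solve the limited-choice problem: put the allowed colors in an array and index it with a random integer, which is the same idea as Processing's `colors[int(random(colors.length))]`. A plain-Java sketch (the hex values are my approximations of Mondrian's palette, not exact references):

```java
import java.util.Random;

public class MondrianPalette {
    // Approximate Mondrian palette as hex strings - plain-Java stand-ins
    // for Processing color() values: red, blue, yellow, black, white.
    static final String[] COLORS = {
        "#DD0100", "#225095", "#FAC901", "#000000", "#FFFFFF"
    };

    // Pick a random element by indexing with a random int in [0, length).
    static String randomColor(Random rng) {
        return COLORS[rng.nextInt(COLORS.length)];
    }

    public static void main(String[] args) {
        Random rng = new Random();
        // Each draw is guaranteed to be one of the five allowed colors.
        for (int i = 0; i < 3; i++) {
            System.out.println(randomColor(rng));
        }
    }
}
```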