Final Project Milestone 3

Hello Everyone,

For my final project, I made a video game about NYU. The game is titled The Game of Life: NYU.

I made it using four different libraries:

p5.dom
p5.sound
p5.SceneManager
p5.gif

For the project itself, the concept is as follows:

You’ve been accepted to college; you are now one of the millions of people around the world who spend hours studying for exams every week. Now that you are part of this community, your task is to graduate. Along the path to this goal, you will encounter many challenges and rewards. You will meet people, join clubs, receive internship opportunities, and eat food, but you will also make bad decisions, pay tuition, and you might even have a crisis regarding your major.

All of this will take place in the context of NYU. The game has a total of five levels, ordered like college (Freshman Year, Sophomore Year, etc.). The final level takes place after completing the Senior Year level: a boss level where you face the boss, Andy Hamilton, to receive your diploma. Once you receive it, you beat the game.

For the game, these are the controls:

> Left and Right arrows for player movement
> Space Bar to pay tuition
> For Game Over:
> 1 to reset the game
> For Intro:
> 1 to start the game
> 2 to go into options
> For Options:
> 1 to go back
> Sliders to adjust volume

Sprites of the game:

What I have learned:

I’ve learned that spending time on code is actually really fun. As a CS major, I got the opportunity to explore the realm outside of what the CS department gives me. I learned about working with different libraries and APIs, about coding in JavaScript and in Java, and I picked up Processing as a skill along the way.

For the project itself, I learned that collision detection is one of the most important aspects of game development. Although most of my collision detection is 2D, spending time making a video game gave me the opportunity to break a project down into a variety of different sections. I also learned that working with the gif library was difficult; the end product, unfortunately, will not use any gifs.
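
The collision code itself isn't shown in this post, but the kind of 2D check a game like this relies on can be sketched in a few lines (the function and the example objects are mine, not the game's):

```javascript
// Axis-aligned bounding-box (AABB) overlap test: two rectangles
// collide exactly when they overlap on both the x and y axes.
function rectsOverlap(a, b) {
  return a.x < b.x + b.w &&   // a's left edge is left of b's right edge
         a.x + a.w > b.x &&   // a's right edge is right of b's left edge
         a.y < b.y + b.h &&   // a's top edge is above b's bottom edge
         a.y + a.h > b.y;     // a's bottom edge is below b's top edge
}

// Hypothetical usage:
// if (rectsOverlap(player, tuitionDesk)) { payTuition(); }
```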

Here is a link to the game:

www.openprocessing.org/sketch/543417

If you guys have any questions about the code, please let me know. The comments I left can hopefully guide you.


Final Project

For my final project, I created a virtual pet game.
I wanted to create a game that is funny and feels like one of those bad games you can find on sketchy websites. My inspiration came from Tamagotchi and Nintendogs, two games that I enjoyed playing as a kid.

When you click the camera, the webcam image becomes the background of the screen. Depending on what the pet is demanding, the player has to click the different needs the pet asks for in order to grow it. In the end, the pet rock turns into The Rock. After this transformation he claims that he doesn’t need any more help and that he is on his own. Each demand is timed, so if the player doesn’t help the pet in time, it dies. It is funny how easily this pet dies, considering it is a pet rock. There is a similar game called “Survive! Mola Mola,” where you take care of an ocean sunfish. Despite the size of this fish, it too dies easily, which is why people find that game so humorous.

The biggest struggle with coding this was simply my malfunctioning laptop and the website closing on me. This happened repeatedly, so I lost my progress multiple times. Once I was done with the final projects for my other classes, which required Adobe programs, I was able to clean out my laptop and have OpenProcessing run more reliably.
Another problem was the timing: it was hard to have the demands run on a specific clock, and it required a lot of lines of code.
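
For illustration, a minimal sketch of one way to time a demand in p5.js; the variable names and the 5-second limit are my assumptions, not the game's actual code:

```javascript
let petAlive = true;
let demandStart = 0;          // millis() when the current demand appeared
const DEMAND_LIMIT = 5000;    // 5 seconds to respond, for illustration

function setup() {
  createCanvas(400, 400);
  demandStart = millis();     // the first demand starts immediately
}

function draw() {
  // If the player hasn't met the demand in time, the (pet) rock dies.
  if (petAlive && millis() - demandStart > DEMAND_LIMIT) {
    petAlive = false;
  }
}

function mousePressed() {
  // Meeting a demand resets the clock for the next one.
  if (petAlive) demandStart = millis();
}
```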

I wish I had more time to work on this project, and I am actually planning to keep working on it over the summer to have something more polished and more efficient. It was a struggle, but this class allowed me to create something online from scratch, which is something most people don’t get to do. I enjoy making these silly games and getting them to actually work.

My code can be found here:

Final project

I am satisfied with how my sound sequencer turned out. I set out to create something that the user would like to play around with and get a little lost in for a while, and I wanted to make it easy for anyone to spend a little time doing something small and end up with something that sounds very pleasant. I was inspired by the Google audio code project and the appeal of audio visualizations. However, it is not perfect, and I don’t think I achieved the goal of having it feel completely seamless.

There are quite a few problems with the way it works right now. First, the program loops through all the circles many times inside the draw loop, and there are many circles to generate in the visualizations; because of all these for loops, the sketch lags when there are too many notes. This is problematic because it restricts how many notes the user can play at a given time. I personally like to think of it as a good thing in one sense: if the user is restricted to a few notes, it might inspire a little more creativity. Still, it doesn’t make for a seamless experience where the user can do whatever he or she wants.

Another problem with the sketch comes from the way Processing handles time. Because draw() runs at a fixed frame rate, the milliseconds value is only sampled once per frame. This makes the timing inaccurate and creates inconsistencies in the spacing between notes that the user can easily notice. Although the inconsistency is only a few milliseconds, the ear is very sensitive; it picks up on these things and takes the user out of the experience. This is something I struggled with, because time management is so integral to my project. I had to learn to be clever with handling the columns of notes: keep a variable capturing the last millisecond measurement and work with the difference.
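
A sketch of that last-measurement trick using millis() in p5.js (the variable names and the playNextColumn() helper are hypothetical, not the sequencer's actual code):

```javascript
let lastMillis = 0;
let accumulated = 0;
let noteInterval = 250;   // ms between columns at the current tempo

function setup() {
  createCanvas(400, 400);
  lastMillis = millis();
}

function draw() {
  const now = millis();
  accumulated += now - lastMillis;   // real elapsed time, not a per-frame guess
  lastMillis = now;

  // Fire (possibly several) columns if draw() ran late, keeping the
  // remainder so the error never builds up frame over frame.
  while (accumulated >= noteInterval) {
    accumulated -= noteInterval;
    playNextColumn();
  }
}

function playNextColumn() {
  // Hypothetical stand-in for advancing and sounding one column of notes.
}
```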

What I do have now are circles falling down from the top of the sketch, with their position representing their relative frequency. I chose the color palette to be more pastel and playful, and because of the way I ordered the FFT bands, it looks like a nice ice cream cone falling. The visualizations are something I definitely improved from my presentation on Friday: I gave them more meaning for the user. I also added a tempo setting, which the user can have a lot of fun and creativity with.

Documentation

https://github.com/ra2353/Creative-Coding-S18/tree/master/Soundcircles

Final Project summary

For my final project, I set out to create a game that would teach people how to recycle. In the end, I’m pretty satisfied with the outcome and feel that I accomplished most of what I wanted to do. One of the things that was successful in the project was the serial communication between the Arduino and Processing, which itself took a week to figure out. I later went back and removed the motion sensor I had in there originally, because it was too sensitive, and made do with three buttons instead. I also think I reached my goal of creating a game that was both educational and visually appealing, as I spent a lot of time on the visuals to make the game pop. However, there are also a lot of things I wish I had more time for and would like to work on moving forward. One is a timer function, something like a 3, 2, 1, go mechanism at the start of the game, which was pretty complex and something I didn’t quite get to. I also would’ve liked to add audio (the sounds of objects falling, movement, background noise, etc.), as I think it would’ve added a lot to the experience. Another complicated feature I would’ve liked to add is a high-score board where players can enter three-character “names” and have a total of maybe five high scores that stay on the end screen even after the game is closed.
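
As a rough illustration of the button protocol (written in JavaScript for consistency with the other sketches here; the real project handled this on the Processing side, and the message format below is an assumption): the Arduino sends one line per button press, and the game maps that index to a bin.

```javascript
// Hypothetical sketch: the Arduino writes "0", "1", or "2" over serial
// whenever one of the three buttons is pressed, and the game maps that
// index to a recycling bin. The bin order here is made up.
const bins = ["paper", "plastic", "landfill"];

function handleSerialLine(line) {
  const idx = parseInt(line.trim(), 10);
  if (idx >= 0 && idx < bins.length) {
    console.log("player chose the " + bins[idx] + " bin");
  }
}

// e.g. handleSerialLine("2") logs: player chose the landfill bin
```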

Link to sketch on OpenProcessing (though it won’t work without the Arduino hardware): https://www.openprocessing.org/sketch/546197

Documentation: documentation

Final post

I had begun to use the Kinect contributed libraries and had gotten a rough understanding of some of the differences between three of them. For my final, I planned to use the depth and motion tracking capabilities of the Kinect to create an interactive display that uses scripted signals to generate an “in-hands” 3D object. Originally, my plan was to use a 3D terrain like the one discussed in class: when a person moves their hands into a certain threshold range, held a certain distance apart, the object would display. The idea is that the person in front of the camera would move the terrain with hand motions, while the person at the computer would switch which object is generated for the user to interact with. I was able to execute the first part, displaying the object; however, I didn’t have enough time to consider different terrains or objects to display. I believe my biggest challenge was the hardware, as most of the library is pretty easy to understand and there isn’t really that much you have to alter in order to display something using the Kinect. It’s just that, due to the subtle differences between the libraries themselves, swapping one Kinect for another (as I soon found out) can either have a detrimental impact on how the code executes or leave the Kinect unrecognized entirely, effectively making all your hard work go to waste.
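
To illustrate the threshold idea in isolation (this is a generic sketch over a hypothetical depth buffer, not the Kinect library's actual API):

```javascript
// Given a depth frame as a flat array of millimeter distances, keep
// only the pixels inside a near/far band -- the basic trick for
// picking out hands held toward the camera.
function pixelsInBand(depth, width, minMm, maxMm) {
  const hits = [];
  for (let i = 0; i < depth.length; i++) {
    const d = depth[i];
    if (d > minMm && d < maxMm) {
      hits.push({ x: i % width, y: Math.floor(i / width) });
    }
  }
  return hits;
}

// e.g. pixelsInBand(depthFrame, 512, 500, 900) would return candidate
// hand pixels roughly 0.5-0.9 m away; the terrain is shown only while
// enough such pixels appear.
```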

Shown above, I was able to use the Kinect v2 library to get the concept for my other sketch done in time; however, I was unable to get a documented video of the code I wrote for the original Kinect (Kinect 1414). I would like to move forward with this sketch, looking more into the Kinect v2 library and adding the OpenCV library in order to change the display space as well. I think it also might be cool as the groundwork for an interactive display showcasing other people’s art and code.

Here are my sketches on OpenProcessing:

- in-class display (Kinect v2):

https://www.openprocessing.org/sketch/547264

- actual final (non-functioning without the Kinect 1414):

https://www.openprocessing.org/sketch/547258


Final Project

For my final project, I set out to use the tools and techniques I had learned during this semester to create an interactive map that people would be able to engage with to find which of multiple dinner parties was closest to them, and to help them get connected to the people hosting it.


You can find my code here!

One of my biggest frustrations with the previous coding classes I’ve taken is that most of the work never goes beyond the compiler, and isn’t something people can easily find and understand. So for this project, I wanted to stretch and try to bring an idea fully to realization, incorporating it into a website that would be easy to find.

I utilized the Google Maps API to create the map. Going into this project, I thought I would have to pixel-map the markers in order to create them, and use p5.js to write functions that trigger events when they are clicked. Once I did my research after getting into the project and learned about all the functionality the Google Maps API provides, I found I could do all of that through the API, in a way that is designed much more beautifully and seamlessly than anything I would have been able to build myself.
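
For reference, a minimal sketch of that approach with the Maps JavaScript API; the party data and the "map" element id below are made up for illustration:

```javascript
// Assumes the Maps JavaScript API is loaded via its usual
// <script src="...&callback=initMap"> tag.
const parties = [
  { name: "Brooklyn Dinner", lat: 40.6782, lng: -73.9442 },
  { name: "Village Dinner", lat: 40.7336, lng: -74.0027 },
];

function initMap() {
  const map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 40.73, lng: -73.99 },
    zoom: 12,
  });

  for (const p of parties) {
    const marker = new google.maps.Marker({
      position: { lat: p.lat, lng: p.lng },
      map: map,
      title: p.name,
    });
    // Click handling comes with the API -- no pixel-mapping required.
    marker.addListener("click", () => {
      new google.maps.InfoWindow({ content: p.name }).open(map, marker);
    });
  }
}
```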

It didn’t take me long to get through step 1 (creating the map and adding the points and events), but it took forever to figure out step 2 (embedding the map into the website and giving the user the ability to manipulate the canvas). This project started off looking very ambitious, then, thanks to the APIs, seemed much simpler, then became nearly impossible once I dove into the computer-generated code from Squarespace that made up the website I was trying to embed my map into.

I feel that I made two critical mistakes that kept my project from finishing as I imagined it would. 1) I based a (rather large) technical aspect of my project on skills I hadn’t quite developed yet. I wanted to stretch, but I should have stretched within the bounds of the canvas and p5.js instead of reaching outside them. 2) My project depended on understanding and interacting with code that I had no documentation on or connection to, and that was not written for humans to interpret. This left me stuck at the second stage for the majority of my time on this project, troubleshooting visibility issues and errors that popped up out of nowhere, hidden within several hundred lines of code.

Overall, I am proud of myself for being able to access and understand the Google API, which I can see being very useful in the future, as well as for navigating the different challenges of this project: acquiring server space, combining JavaScript with HTML and CSS, learning how to work with a project that spans multiple files, and learning how to ask for help when I know I need it. One of the best classes I’ve taken at NYU thus far. Thank you, Scott.

Final Project Documentation - Bird Sound Game

For my final project, I created an audio-focused game in which the user listens for bird sounds in various scenarios. The user presses a key when they think they heard the bird and, based on whether they were right or wrong, gains points or loses attempts. If you use up more than three attempts, you lose the game. The game really requires the user to use their listening skills, and it tests their ability to focus. Below I have attached a picture of the game’s start screen. The link to the game and all of its code is:

https://alpha.editor.p5js.org/shahriarsadi98/sketches/BkHJGovhz

A link to the Google Drive folder with most of the game’s audio files and visuals:

https://drive.google.com/drive/folders/1hJmN0dj4OaV0l20YK7rFIx935Wz5Lb6X?usp=sharing

I created the game solely in p5.js. I chose p5.js over Processing in Java because of the libraries available for p5.js, the most important of which was Scene Manager. Scene Manager allowed me to create a start menu, an instruction menu, a game function, and winner and loser screens. Using Scene Manager, I could easily switch between those scenes without one scene’s code affecting the others. Below I have attached an image of some snippets of the Scene Manager code. I also had various menus with buttons that lead to new scenes, and having Scene Manager really made that process easier.
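
From memory of the library, the p5.SceneManager wiring typically looks something like the sketch below; the scene contents are placeholders, not the game's real code:

```javascript
let mgr;

function setup() {
  createCanvas(600, 400);
  mgr = new SceneManager();
  mgr.addScene(StartMenu);
  mgr.addScene(Game);
  mgr.showScene(StartMenu);
}

function draw() {
  mgr.draw();                      // delegates to the active scene's draw()
}

function keyPressed() {
  mgr.handleEvent("keyPressed");   // forwards input to the active scene only
}

function StartMenu() {
  this.draw = function () {
    background(0);
    fill(255);
    text("Press any key to start", 20, 20);
  };
  this.keyPressed = function () {
    this.sceneManager.showScene(Game);   // switch without touching Game's code
  };
}

function Game() {
  this.draw = function () {
    background(50);   // the bird game itself would run here
  };
  this.keyPressed = function () {};
}
```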

Another crucial aspect of the game was a timer. I needed the timer to keep track of when the user pressed the key and whether that matched up with the time the bird sound played. The timer had to start exactly when the user entered the game scene; I did not want time spent on the start menu to affect the game. So I created a variable called offset to mitigate that issue and subtracted it from a variable I had called game seconds. Below I have attached an image of some of my code for the timer.
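
The idea in miniature (the names below are mine, not the game's actual variables):

```javascript
// millis() counts from page load, so subtract the moment the game
// scene was entered to get time-in-game.
let offset = 0;

function enterGame() {                   // hypothetical scene-switch hook
  offset = millis();
}

function gameSeconds() {
  return (millis() - offset) / 1000;     // seconds since the game began
}
```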

An issue I ran into while making the game was that the audio would restart in a never-ending loop that sounded screechy. I mitigated this by loading the files in a preload function, after which the audio played smoothly. Scene Manager also made it easy to ensure the audio files only played in their given menus. I created a point system and an attempt system under my keyPressed function, using if statements to check when the key was pressed and act accordingly. I have attached below screenshots of both the attempts and score code snippets.
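
A small sketch of the preload fix together with a keyPressed scoring check (the file path and the scoring condition are my assumptions):

```javascript
let birdCall;
let score = 0;

function preload() {
  // Load once, up front -- reloading or restarting the clip every
  // frame is what produces the screech.
  birdCall = loadSound("assets/bird1.mp3");   // assumed file path
}

function setup() {
  createCanvas(400, 400);
}

function keyPressed() {
  // Assumed rule: award a point only if the call is playing right now.
  if (key === " " && birdCall.isPlaying()) {
    score++;
  }
}
```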

My game also has a visualizer, which I coded to move with how loud the in-game sound is. It gives the user a visual that might help them, and overall I think it looks cool and helps the game’s aesthetic. One addition I wanted to make was having the color change based on whether the user got an answer right or wrong. I tried to get it to change color, but everything I tried simply did not work. Hopefully, in the future, I can fix this.

For the future of this game, I would like to do two things. First, make the game more automated and random: right now the game follows a fixed sequence of audio clips, and making the bird’s arrival random and unpredictable would make the game more challenging and fun. I already have an idea of how to achieve that: upload all the bird noises and scenarios into p5.js, create an algorithm to play the sounds randomly, and then use an if statement so that when a specific bird sound is playing and the user presses the button, they get a point (a rough sketch of this follows below). Second, I would like to add a high-score system and a Twitter bot that tweets whenever the high score is beaten. I believe both of those are possible and something I will be working on over the summer. I hope you enjoyed my final and having me as a student. I really enjoyed your class and learned a lot. I hope to one day have a class with you again; thank you very much for a wonderful semester.
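
A rough sketch of that planned randomization (file structure and names are assumptions):

```javascript
let clips = [];            // filled in preload() with entries like
                           // { sound: loadSound("assets/bird1.mp3"), isBird: true }
let currentIsBird = false;
let score = 0;

function playRandomClip() {
  if (clips.length === 0) return;
  const pick = random(clips);        // p5's random() picks from an array
  currentIsBird = pick.isBird;
  pick.sound.play();
  // Queue the next clip 3-10 seconds out, so arrivals stay unpredictable.
  setTimeout(playRandomClip, random(3000, 10000));
}

function keyPressed() {
  // The planned if statement: reward a press only while a bird clip plays.
  if (currentIsBird) score++;
}
```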

If you would like to see the slides, I have attached them below as well.

Bird Audio Game Final Presentation

Midterm reflection - late (read disclaimer)

Starting off reading chapters 10 and 11, I can immediately point out that my coding flies in the face of these chapters’ teachings. I have very little in the way of organization, and my code is more of an amalgamation than an organized set of well-thought-out functions. I think part of this lies in the fact that I originally didn’t know how to do what I wanted and ended up with a lot of errors in my code; in trying to brute-force my way past these problems, I ended up coding very inefficiently. I recognized that I needed help and took action to receive it; however, had I read these chapters beforehand, I might have been able to understand and work through some of the problems that came up rather than just giving up and seeking tutoring.


DISCLAIMER: THIS IS LATE DUE TO UPLOADING IT ON THE WRONG WORDPRESS BLOG!!!

Final Project Documentation

For my final project, I created the code for a 5-minute live coding performance. Initially I wanted to simply make a visualization of a song that I like, but as I progressed and learned more about the Gibber library, I started to make my own sound. The interactive experience that my project provides is a little different from the interactive sketches I’ve created previously, since the audience is just watching me interact with my code. However, I kind of fell in love with this form of performance and live coding, and I would like to explore this field further in the future.

My code is written in the order of execution rather than being organized by object or sound element. This means code for manipulating an element of the variable ‘b’ might be written between ‘e’ and ‘f’. Also, each body of code divided by the spacing is meant to be highlighted together and executed all at once.

The main idea behind the progression of this audiovisual performance is that it starts off with a stable and rhythmical audio and visual pattern, and as the code progresses the pattern loses its stability and descends into complete chaos until a voice calls out for help. The rhythms of the drums, plucks, and bass at the beginning match (kind of), and putting them in a melody (D, E, Bb, A) makes the sound stick in your head. Then I introduce a sad man’s voice with white noise, indicating a transition. The sounds and visuals introduced after this are less rhythmical and more dreamy and formless. On top of these dreamy synth sounds, I start to add distortion and panning to break the sound even further. The visual is pixelated and modified with multiple filters/shaders to reflect the sound. Although many components of this performance are organized and planned, several parts, such as the notes and rhythms for each instrument/sound, are randomized; while practicing, I would like one run better than another simply because it happened to sound better. I ran into many cases where the plucking (which has a random rhythm) did not quite match the drums, so I would panic at the beginning of the code, but I would figure it out by changing the rhythm or just moving on.

The work process had three parts: 1) putting together the audio elements; 2) making each section of the visualization; 3) integrating them all and deciding the order of the code for the performance. Part 1 was fun and only required me to decide on the instruments, notes, and rhythms I wanted and put them together so the whole thing sounded good to me. It started to get slightly more challenging from part 2, because the visual elements had to reflect the audio I had created. One thing I learned during this process is that a good sound visualization does not always reflect the master output. For example, in the part where I introduce the robotic voice, I made the visual reflect the voice’s output rather than the master output, because that makes the visualization more dynamic and look authentic to the audience. The last part was about organizing everything so I could come up with something complete and presentable.

One difficulty I faced and could not solve was a problem with the freesound.org API. I wish I had recorded the sound when the API worked, because I really liked how the freesound samples worked with my sound. Gibber was created five years ago, and after a while it started to have problems with the API. Charlie Roberts, the creator of the library, came up with a fix two years ago, but I assume freesound has changed its API policy and introduced a different authentication process since then.

Anyway, I had a lot of fun with this project, and I wish to learn other live coding techniques so I can build solid knowledge and skills around this type of coding.

Code: https://www.openprocessing.org/sketch/546066

Recording: https://www.youtube.com/watch?v=ubU9xgEJvzI&feature=youtu.be

Final Project

For my final sketch, I wanted to create a music visualizer that the audience could also add sounds to. As the project progressed, I got carried away making the visuals, and due to lack of time I opted out of preloading sounds that the audience could trigger by pressing keys on the keyboard. However, that is something I would like to explore in the future, especially in a way that would let the audience add something that doesn’t sound “bad” alongside the music.

My final sketch consists of a title screen with “stars” dotting the window and a button that starts the song. When the song starts, it turns on the computer’s camera, and the subsequent visualizations consist of video manipulation or elements added to the screen in time with the song. For this project I used p5.js’s DOM and sound libraries. The visualizations include changing the video’s color, adding lyrics, a “static” effect, a “glitch” effect, a “lag” effect, a zoom effect, a pixel effect, and a pixel/grayscale/movement effect. Unfortunately, I was ultimately unable to connect all the pieces of my project into one sketch; I had trouble combining the video manipulations, though I successfully included the added elements like color, static, and lyrics. While I am disappointed that I couldn’t fully realize my project, I’m glad I was able to create each video effect individually and independently of the others. I really enjoyed creating these sketches and explored video manipulation in depth and in creative ways; in the future, I would definitely like to connect all of the sketches into one piece and continue to create sketches alongside music, further exploring the different visualizations I can make!
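
As one example of the kind of effect described, here is a minimal pixelation sketch in p5.js; the grid size and canvas dimensions are my choices, not the project's:

```javascript
let cam;
const STEP = 12;   // size of each "pixel" block

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();      // draw the video ourselves instead of as a DOM element
  noStroke();
}

function draw() {
  cam.loadPixels();
  // Sample the capture on a coarse grid and draw one flat-colored
  // square per sample point.
  for (let y = 0; y < cam.height; y += STEP) {
    for (let x = 0; x < cam.width; x += STEP) {
      const i = 4 * (y * cam.width + x);   // RGBA index into pixels[]
      fill(cam.pixels[i], cam.pixels[i + 1], cam.pixels[i + 2]);
      rect(x, y, STEP, STEP);
    }
  }
}
```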

Link to final sketch: https://www.openprocessing.org/sketch/546049

Link to individual video effects: https://www.openprocessing.org/sketch/545819 (tint), https://www.openprocessing.org/sketch/545846 (lyrics), https://www.openprocessing.org/sketch/545911 (zoom), https://www.openprocessing.org/sketch/545914 (lag), https://www.openprocessing.org/sketch/543849 (pixelates), https://www.openprocessing.org/sketch/545986 (glitch), https://www.openprocessing.org/sketch/546013 (static), https://www.openprocessing.org/sketch/546018 (movement)