Saturday, December 5, 2009

Photography: Nikon D3000: Pencils



[ 50 Colored Pencils ]

Just playing around with my new camera, a Nikon D3000 SLR. It's very, very, very NICE! I needed something to photograph, and I still had all my pencils out from some animation I was doing earlier.


[ Blue to Green ]



[ Blue Pencils ]

Programming: Java: Game: Space Shooter


[ Space Shooter 2D ]

A month or two ago I started developing a Java-based video game as a way to test myself. The game looked far different in its early days from how it does now, and it has gone through many radical design changes in between (at one point it was a tank game).

However, I finally settled on the idea of a space shooter! You play as the red ship and have to fight off oncoming waves of enemy space cruisers and space tanks.

Throughout development I have given BETA versions of the game to my friends to test and try to break so I can fix potential bugs. One major bug, or perhaps not a bug but a loophole, was that if you held the fire button down, after a second or two of nothing happening a continuous stream of missiles would fire from your ship, letting you blow up everything in one fell swoop.

I fixed this by making it so you can only fire one rocket per button press, and later in development I extended this so that your gun overheats if you fire too many missiles in quick succession (as displayed by the Weapon Temperature bar on the HUD).
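In case anyone wants to try something similar, here is a rough sketch of how that kind of mechanic can be put together. This is not the actual game code; the class name, numbers, and cooling rate are made up purely for illustration.

// A minimal sketch of the one-shot-per-press + overheating idea described above.
// Not the real game code; names and numbers are invented for illustration only.
public class WeaponHeat {
    private double temperature = 0.0;             // shown on the HUD as the Weapon Temperature bar
    private static final double MAX_TEMP = 100.0; // gun refuses to fire above this
    private static final double HEAT_PER_SHOT = 15.0;
    private static final double COOL_PER_TICK = 0.5;
    private boolean triggerWasDown = false;

    /** Called once per game tick with the current state of the fire button. */
    public boolean tryFire(boolean triggerDown) {
        boolean fired = false;
        // Only fire on the transition from "up" to "down": one rocket per button press.
        if (triggerDown && !triggerWasDown && temperature + HEAT_PER_SHOT <= MAX_TEMP) {
            temperature += HEAT_PER_SHOT;
            fired = true;
        }
        triggerWasDown = triggerDown;
        // The gun cools off a little every tick, so spaced-out shots never overheat it.
        temperature = Math.max(0.0, temperature - COOL_PER_TICK);
        return fired;
    }

    public double getTemperature() {
        return temperature;
    }
}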

A public BETA of the game can be downloaded from here:

Robotics: GARY: Wiimote Controlled Robotics


[ GARY with Eb500 Bluetooth Module and Wiimote ]

This was a project I did a while back. I used my laptop's Kingston USB Bluetooth card and an Eb500 Bluetooth module to set up wireless communication with my robot, GARY.

At first I wrote simple control apps which either had buttons that needed mouse input or utilized the directional arrows on the keyboard to accept commands.

However, as time went by these programs improved. I started using a wired Xbox 360 controller to send commands to the robot, and then finally it struck me: a Wiimote! I had already used the Wiimote with my computer before this; I had written a program which, paired with a DIY Wii sensor bar, let me use a Wiimote to control my laptop's mouse, including mapping the left, right, and double-click actions to various buttons on the controller.

Utilizing the classes I had already written to interface with the Wiimote's accelerometers, I was able to write new software for GARY and my computer that allows the Wiimote to control GARY, using my laptop as a "hub" to boost the signal range of the Bluetooth setup.

To ensure that the motion of the robot would be as smooth as possible, I developed "Predicted Acceleration Curves" for each of the robot's two drive motors.


[ Predicted Acceleration Curves ]

Note:
  • Curves for the left motor are displayed with solid lines, the right motor with dashed lines.
  • Red lines are for the Wiimote's X axis and control forwards/backwards motion.
  • Blue lines are for the Wiimote's Y axis and control left/right motion.
  • The Wiimote's accelerometers (graph X axis) go from -100% to 100%.
  • The robot's two drive motors (graph Y axis) accept speed values from 650 to 850, where 750 is stationary.
By using these curves at run time, the robot is able to generate the correct speed for each of its motors to ensure smooth motion. This is done by solving each of the four curves for the value given by the Wiimote's accelerometers, then averaging each side: averaging the motion and turn curve values for the left side yields the final speed for the left motor, and averaging the motion and turn curve values for the right side yields the final speed for the right motor.
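To illustrate that averaging step, here is a rough sketch in Java. The real curves were tuned by hand, so the cubic shape and scaling below are only placeholders that use the ranges from the notes above; this is not the code running on GARY.

// A sketch of the curve-averaging step. The curve shape here is a guess; only the
// ranges (accelerometer -100..100 %, motor speeds 650..850 with 750 stationary) come
// from the notes above.
public class DriveCurves {
    private static final double STOP = 750;   // motors are stationary at 750
    private static final double RANGE = 100;  // full speed offset, giving 650..850

    /** Placeholder "predicted acceleration curve": maps -100..100 % to a speed offset. */
    private static double curve(double accelPercent, double sign) {
        double n = Math.max(-1.0, Math.min(1.0, accelPercent / 100.0));
        return sign * RANGE * n * n * n;       // cubic gives a gentle dead zone near zero
    }

    /** x = Wiimote X axis (forward/back), y = Wiimote Y axis (left/right), both -100..100. */
    public static double[] motorSpeeds(double x, double y) {
        double leftMotion  = curve(x, +1), rightMotion = curve(x, +1);
        double leftTurn    = curve(y, +1), rightTurn   = curve(y, -1); // assume turning drives the sides oppositely
        // Average the motion and turn curve values on each side, then offset from stationary.
        double left  = STOP + (leftMotion  + leftTurn)  / 2.0;
        double right = STOP + (rightMotion + rightTurn) / 2.0;
        return new double[] { left, right };
    }

    public static void main(String[] args) {
        double[] s = motorSpeeds(50, -20);     // tilt forward, slight left
        System.out.printf("left=%.0f right=%.0f%n", s[0], s[1]);
    }
}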

Here is a video of the software and robot in action.

Enjoy!



Animation: Pixilation: Cube

[ Cube ]

An attempt at pixilation, or the animation of inanimate objects. I think it came out quite well; it went into the Menger Sponge project, a 3D representation of the Menger fractal built entirely out of business cards. The project is almost complete now, as it only needs another 5 stage 1 cubes to finish it.

The cube built in this video is a stage 0; a stage 1 is 20 stage 0's; and a stage 2 is 20 stage 1's or 400 stage 0's...
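Just to show the arithmetic, the number of stage 0 cubes grows by a factor of 20 per stage, so a stage n sponge needs 20^n of them. A quick throwaway snippet (nothing to do with the animation itself, just the counting):

// Cube counts per stage: stage 1 = 20, stage 2 = 400, stage 3 = 8000, ...
public class SpongeCount {
    public static void main(String[] args) {
        for (int stage = 0; stage <= 3; stage++) {
            long stageZeroCubes = (long) Math.pow(20, stage); // 20^stage stage-0 cubes
            System.out.println("Stage " + stage + " = " + stageZeroCubes + " stage-0 cubes");
        }
    }
}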


[ Stage 2 Sponge ]




[ Stage 2 Sponge ]
Enjoy!


Animation: Claymation: Alien

[ Ripley, Newt, and the Alien ]

Probably my funniest Claymation, this one won me the award for "Best in Animation" at the 2009 Hopkinton Film Festival. It took about a week to film and edit.


[ Award for Best Animation in Show, 2009 Hopkinton Film Festival ]

Enjoy!


Animation: Claymation: Mush-A-Morph

[ Morph from Mush-A-Morph ]

Arguably my best Claymation, or at least it's the one with the most views. This is the product of sitting in the darkroom in the photo lab for three hours with a camera and a lighting rig.

Enjoy!


Animation: Pencil: Final Pencil Test


[ Frame 72: Pencil Test Final ]

As a change from my usual clay-based animations, I decided to draw one by hand. It's pretty basic; I'm not exactly going for an Oscar with it. The man walks in through the doorway, is teleported up into the sky, and falls back down, crashing through the floor and taking the world with him.

Enjoy!


Programming: Java: Video Tracking: Color Recognition



[ Effect of Sobel Edge Detection Shown in Yellow ]
[ and Color Detection Shown in Blue ]
Intro:
I first experimented with video tracking about a year ago; at the time I was programming primarily in VB.NET 2008. A powerful language, yes, but when it comes to graphics and "time expensive" processes, its bad side begins to show through.

About a week ago I decided to reopen this project and give it another go, this time in Java. My reasoning? Java runs a lot faster and is much better at handling color arrays, the basis for image recognition.

Summary of technique:
There are many methods for scanning video feeds, and none of them is perfect; they all have their downfalls. The key to a good tracking program is to use several of these methods together so that one method's weakness is covered by another's strength.

After much research, the types of image recognition I chose to build on were:
  • Sobel Edge Detection
  • Color Recognition
  • Motion Detection



[ Effect of Color Detection Shown in Blue ]

Color Detection Algorithm:
Color-based tracking is a method by which an image or video feed is scanned for a given color. Being the most common method of object recognition, it has been implemented in many different ways. After researching what the method actually needs to do, I decided on a system where the user clicks on the part of the image containing the color they want to track, and the program then uses this color as the reference in its scan.

To do this I had to write multiple systems into the program. First, a system by which to get the color when the user clicks on the image feed. Second, a system to display this color choice to the user along with the threshold for the scan. And thirdly, the SCAN!

The scan is what I'll focus on in this article. We loop through each X and Y pixel of the image and compare the grayscale (black and white) value of each pixel to the grayscale value of the given color. We do this first because it is faster than checking the red, green, and blue channels of every pixel in the image.

Once we have decided that a pixel is likely to contain the search color, we flag it with a semi-transparent blue mark. Then we continue by checking whether the red channels of the pixel and the search color are within the threshold of each other. If that checks out we check the green channels, and finally, if that checks out too, the blue channels.

Once we have confirmed that the given pixel is the correct color we mark it with a solid blue mark.

Color Example Code: Java

public void Update(BufferedImage buf) {
        img = buf; //Store the image
        int w = img.getWidth(); //Image width
        int h = img.getHeight(); //Image height

        FlagMap = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); //Create the overlay

        for (int x = 0; x < w; x += scan) { //Scan through each X of the image
            for (int y = 0; y < h; y += scan) { //For each X scan through each Y of the image

                Color C = new Color(img.getRGB(x, y)); //Get the pixel color at the current position
                int GS = (C.getRed() + C.getGreen() + C.getBlue()) / 3; //Generate the grayscale value

                //Color detection
                if ((AvgT - ct) < GS && (AvgT + ct) > GS) { //Is the grayscale value within the threshold?
                    FlagMap.setRGB(x, y, FlagColor.getRGB()); //If so, mark with a semi-transparent blue mark

                    //Is each color channel within the threshold?
                    if ((TColor.getRed() - ct) < C.getRed() && (TColor.getRed() + ct) > C.getRed()) {
                        if ((TColor.getGreen() - ct) < C.getGreen() && (TColor.getGreen() + ct) > C.getGreen()) {
                            if ((TColor.getBlue() - ct) < C.getBlue() && (TColor.getBlue() + ct) > C.getBlue()) {
                                FlagMap.setRGB(x, y, DetectColor.getRGB()); //If the R, G, and B channels are all within the threshold, mark with a solid blue mark
                            }
                        }
                    }
                }

            }
        }
}

Notes on example code:
  • The variable "scan" is an integer and refers to the accuracy of the scan. eg. scan = 1 would mean that every pixel of the image would be scanned. scan = 2 would mean every other pixel would be scanned. scan = 5 would mean every fifth pixel... ect, ect...
  • The variable "ct" is an integer and refers to the Threshold for pixel comparison. This is set in the program via the Trackbar in the Top right of the Screen which allows for differences of 0 to 40.
  • The variable "TColor" is a color and refers to the color the user wants the algorithm to scan.
  • The variable "AvgT" is an integer and refers to the average of variable "TColor"'s R, G, and B Channels.
  • My program has a separate class for displaying the image to the window, it draws the actual image first then draws what are called "overlays" on top of it. In this case the overlay is the image "FlagMap" which is transparent everywhere except where an edge has been detected.
  • For best results, downsize your image to around 200x200 pixels before passing it to this algorithm and set "scan" to 1. If you want to scan a larger image you will need to set "scan" to a higher value to maintain performance.
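A minimal sketch of that compositing step, assuming the overlay is an ARGB image like "FlagMap". The class and method names here are made up; this is a simplified stand-in for my display class, not the real thing.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OverlayCompositor {
    /** Draws the camera frame first, then the transparent ARGB overlay(s) on top of it. */
    public static BufferedImage compose(BufferedImage frame, BufferedImage... overlays) {
        BufferedImage out = new BufferedImage(frame.getWidth(), frame.getHeight(),
                                              BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(frame, 0, 0, null);            //the actual image
        for (BufferedImage overlay : overlays) {
            g.drawImage(overlay, 0, 0, null);      //e.g. FlagMap or EdgeMap
        }
        g.dispose();
        return out;
    }
}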
I hope this helps anyone who is trying to do something in this field, it's a tricky one. Please subscribe, more like this is on the way!

Programming: Java: Video Tracking: Sobel Algorithm




[ Effect of Sobel Edge Detection Shown in Yellow ]
[ and Color Detection Shown in Blue ]
Intro:
I first experimented with video tracking about a year ago; at the time I was programming primarily in VB.NET 2008. A powerful language, yes, but when it comes to graphics and "time expensive" processes, its bad side begins to show through.

About a week ago I decided to reopen this project and give it another go, this time in Java. My reasoning? Java runs a lot faster and is much better at handling color arrays, the basis for image recognition.

Summary of technique:
There are many methods for scanning video feeds, and none of them is perfect; they all have their downfalls. The key to a good tracking program is to use several of these methods together so that one method's weakness is covered by another's strength.

After much research, the types of image recognition I chose to build on were:
  • Sobel Edge Detection
  • Color Recognition
  • Motion Detection 

 [ Effect of Sobel Edge Detection Shown in Yellow ]

The Sobel Algorithm:
Sobel image analysis is a way of scanning an image to detect where the edges of objects are. This is especially useful when scanning high-resolution video feeds: by first using the Sobel algorithm to mark points of interest in the image, you avoid the need to scan the entire image for color or motion (see the sketch after the notes at the end of this post).

The algorithm works by looking at each pixel of the image and comparing it to the eight pixels surrounding it. If any surrounding pixel differs from it by more than a given threshold, the point is flagged as an edge (in my program edges are flagged with a yellow point).

Sobel Example Code: Java

public void Update(BufferedImage buf) {
        img = buf; //Store the image
        int w = img.getWidth(); //Image width
        int h = img.getHeight(); //Image height

        EdgeMap = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); //Create the overlay

        for (int x = 0; x < w; x += scan) { //Scan through each X of the image
            for (int y = 0; y < h; y += scan) { //For each X scan through each Y of the image

                Color C = new Color(img.getRGB(x, y)); //Get the pixel color at the current position
                int GS = (C.getRed() + C.getGreen() + C.getBlue()) / 3; //Generate the grayscale value

                //Sobel algorithm
                boolean W = false; //Switch for whether the point is an edge
                if (x > scan && x < w - scan) { //Check we aren't on an X edge pixel
                    if (y > scan && y < h - scan) { //Check we aren't on a Y edge pixel

                        for (int u = -scan; u <= scan; u += scan) { //Scan the surrounding X pixels
                            for (int v = -scan; v <= scan; v += scan) { //For each X scan the surrounding Y pixels

                                if (u != 0 || v != 0) { //Skip only the centre pixel, so all eight neighbours are checked
                                    Color B = new Color(img.getRGB(x + u, y + v)); //Get the current bordering color
                                    int BS = (B.getRed() + B.getGreen() + B.getBlue()) / 3; //Generate its grayscale value
                                    if (Math.abs(GS - BS) >= t) { //Are they different enough?
                                        W = true; //If so, set the switch to true
                                    }
                                }
                            }
                        }

                    }
                }

                if (W) { //Did the pixel have at least one bordering pixel that was different enough?
                    EdgeMap.setRGB(x, y, EdgeColor.getRGB()); //If so, draw a yellow pixel on the overlay
                }
                //End Sobel

            }
        }
}

Notes on example code:
  • The variable "scan" is an integer and refers to the accuracy of the scan. eg. scan = 1 would mean that every pixel of the image would be scanned. scan = 2 would mean every other pixel would be scanned. scan = 5 would mean every fifth pixel... ect, ect...
  • The variable "t" is an integer and refers to the Threshold for pixel comparison. This is set in the program via the Trackbar in the Top right of the Screen which allows for differences of 0 to 40.
  • My program has a separate class for displaying the image to the window, it draws the actual image first then draws what are called "overlays" on top of it. In this case the overlay is the image "EdgeMap" which is transparent everywhere except where an edge has been detected.
  • For best results, downsize your image to around 200x200 pixels before passing it to this algorithm and set "scan" to 1. If you want to scan a larger image you will need to set "scan" to a higher value to maintain performance.
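Finally, as I mentioned above, the real payoff comes from combining methods. Here is a rough sketch, not the code from my program, of one way to do it: only run the color comparison on pixels the edge pass has already flagged in "EdgeMap". The class and method names are made up for illustration.

import java.awt.Color;
import java.awt.image.BufferedImage;

public class CombinedScan {
    /**
     * Only runs the color comparison on pixels the edge pass has already flagged,
     * so most of a high-resolution frame is never touched by the color test.
     */
    public static BufferedImage colorAtEdges(BufferedImage img, BufferedImage edgeMap,
                                             Color target, int threshold, int scan) {
        BufferedImage hits = new BufferedImage(img.getWidth(), img.getHeight(),
                                               BufferedImage.TYPE_INT_ARGB);
        int mark = new Color(0, 0, 255, 255).getRGB();            //solid blue, as in the color scan
        for (int x = 0; x < img.getWidth(); x += scan) {
            for (int y = 0; y < img.getHeight(); y += scan) {
                if ((edgeMap.getRGB(x, y) >>> 24) == 0) continue; //skip pixels the edge pass left transparent
                Color c = new Color(img.getRGB(x, y));
                if (Math.abs(c.getRed()   - target.getRed())   < threshold
                 && Math.abs(c.getGreen() - target.getGreen()) < threshold
                 && Math.abs(c.getBlue()  - target.getBlue())  < threshold) {
                    hits.setRGB(x, y, mark);                      //mark the pixel as a color match
                }
            }
        }
        return hits;
    }
}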
I hope this helps anyone who is trying to do something in this field, it's a tricky one. Please subscribe, more like this is on the way!