Monday, December 19, 2011

glCreateShader???

I have implemented real-time rendering of the camera preview onto a canvas as an OpenGL texture. The only problem is that I could not successfully create and compile my own shader. I tried many approaches and could not find where the problem lies. Hopefully I can figure it out soon.

Thursday, December 15, 2011

Sepia tone on Android phone

I can now apply a sepia tone to the Android camera preview to generate a pretty decent old-film effect. I believe many other effects could be generated in similar ways. However, since everything still runs on the CPU, the frame rate is as low as 4 fps. Trying hard to get GLSL shaders working on the phone.
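On the CPU side, a per-pixel sepia pass can be sketched in plain Java like this (a minimal sketch using the commonly quoted sepia coefficients; the exact linear equation used on the phone may differ):

```java
// Sketch of a CPU sepia filter on ARGB pixels (hypothetical helper,
// using the widely quoted 0.393/0.769/0.189-style coefficients).
public class Sepia {
    static int sepia(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        // Linear combination of the input channels, clamped to 255.
        int sr = Math.min(255, (int) (0.393 * r + 0.769 * g + 0.189 * b));
        int sg = Math.min(255, (int) (0.349 * r + 0.686 * g + 0.168 * b));
        int sb = Math.min(255, (int) (0.272 * r + 0.534 * g + 0.131 * b));
        return (argb & 0xFF000000) | (sr << 16) | (sg << 8) | sb;
    }
}
```

Running a loop like this over every pixel of every preview frame is exactly the per-frame CPU cost that keeps the frame rate around 4 fps, which is why moving it into a fragment shader should help.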



Cheers!

Monday, December 12, 2011

Could Modify View!

Just figured out how to modify the view on the canvas of the SurfaceView class! So excited!

Instead of calling the onDraw() function directly in CameraPreviewView (which inherits from SurfaceView), I use a Handler to post the update() function in the ViewGroup, and the ViewGroup's update() calls the update() function in the CameraPreviewView class. There, it calls postInvalidate(), which triggers onDraw().

The pipeline is from this link (sorry, it is in Chinese, but the code works well: it displays a blue box moving from the left to the right of the screen): http://l12052124.iteye.com/blog/745232

On the other hand, I created a Bitmap field in the CameraPreviewView class and update it with the bitmap built from the camera's modified preview data. The class's onDraw() function draws this Bitmap directly to the UI. That works!

However, since all of the operations run on the CPU, the frame rate is still low. But the core part is in place!

Two questions are left: the algorithm I am using to convert camera preview data to a Bitmap outputs RGBA, but the RGB values are in the range 0 to 262143, which is 512 * 512 - 1, instead of 0 to 255.
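One likely explanation, assuming the conversion is the widely circulated decodeYUV420SP-style routine: its math is fixed-point with 10 fractional bits, so the clamped range 0..262143 (which is 2^18 - 1) is just an 8-bit value carried in the high bits, and shifting right by 10 recovers normal 0..255 channels. A sketch of the per-pixel step:

```java
// Sketch of a fixed-point YUV-to-ARGB step (decodeYUV420SP-style math).
public class YuvPixel {
    static int yuvToArgb(int y, int u, int v) {
        int yScaled = 1192 * Math.max(y - 16, 0); // ~1.164 scaled by 1024
        int r = yScaled + 1634 * (v - 128);
        int g = yScaled - 833 * (v - 128) - 400 * (u - 128);
        int b = yScaled + 2066 * (u - 128);
        // The intermediate values carry 10 fractional bits, hence the
        // clamp range 0..262143 (2^18 - 1) rather than 0..255.
        r = Math.min(Math.max(r, 0), 262143);
        g = Math.min(Math.max(g, 0), 262143);
        b = Math.min(Math.max(b, 0), 262143);
        // Shift right by 10 (divide by 1024) to get 8-bit channels back.
        return 0xFF000000 | ((r >> 10) << 16) | ((g >> 10) << 8) | (b >> 10);
    }
}
```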

And how does the postDelayed() function work in the update loop?

Could anyone shed some light on them? Thanks! Cheers! Move On!

What is now on Android Phone

This is basically what I have now on the Android phone. Over the camera preview scene I can now embed a new bitmap. Now I am trying hard to replace the apple with the modified bitmap built from the preview data. Cheer up and carry on!

Solving view problem

Although I can draw another SurfaceView on top of the original one, it is really hard to display the images from the camera preview on the canvas of the new view. Still working on it. If I can't make it work in one or two days, maybe I will have to change strategy. I'll try my best!

Thursday, December 8, 2011

SurfaceView Implemented

Implemented a SurfaceView over the default camera preview. Now working on updating the SurfaceView with a bitmap built from the modified camera preview. Later this will use parallel computation on the GPU.

Friday, December 2, 2011

Implemented Android Camera Preview

I have just implemented the camera preview setup in Java code. However, there are still some bugs; for example, the app crashes if we tap the "take photo" button. I am trying to modify the preview scene with old-movie effects and deal with the bugs at the same time.

Monday, November 28, 2011

Working on Android Camera

I have figured out how to debug on the Android phone via Eclipse and have been testing Android camera commands.

Wednesday, November 16, 2011

Grainy Effect Implemented

I have simulated the grainy effect as shown below:
For this I first generate a random integer between 1000 and 2000 on the CPU, called rand_seed, and pass it into the GLSL fragment shader as a uniform variable. In GLSL I use the sin function as a pseudo-random number generator:

int rand = int(abs(sin(rand_seed * fs_Texcoords.x * fs_Texcoords.y)) * 10000.0);
rand = rand & 63;
out_Color *= max(1.0, float(rand) / 64.0 + 0.5);

The first line generates an integer from fs_Texcoords. The bitwise AND with 0x3F keeps the low six bits, which effectively uses the third and fourth digits after the decimal point of the sine value. The result is then divided by 64.0 to get the grain weight of the current pixel. Since the color of each pixel should be determined mostly by the original picture, I decrease the influence of the grain by half using max(1.0, float(rand) / 64.0 + 0.5) and multiply the result into out_Color. Because rand_seed is updated every frame, this method easily creates a dynamically distributed grain pattern over time.
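To sanity-check the range of the grain weight, the same math can be mirrored on the CPU in plain Java (a hypothetical replica of the shader lines, assuming texture coordinates in [0, 1]):

```java
// Hypothetical Java replica of the shader's grain math, for sanity checking.
public class Grain {
    // Mirrors: int rand = int(abs(sin(seed * u * v)) * 10000.0); rand &= 63;
    static double grainWeight(int randSeed, double u, double v) {
        int rand = (int) (Math.abs(Math.sin(randSeed * u * v)) * 10000.0);
        rand &= 63;                               // keep the low six bits
        return Math.max(1.0, rand / 64.0 + 0.5);  // weight multiplied into the color
    }
}
```

Since rand & 63 is at most 63, the weight always lands in [1.0, 1.484375], so this particular formulation only ever brightens a pixel slightly; it never darkens one.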

Scratches Added

I have added scratches to images using GLSL. I added a new layer with a scratches image as an extra texture, then multiplied the color from the scratches layer with the original image pixel by pixel, creating the blended effect.
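On CPU pixels the same multiply blend would look roughly like this in Java (a hypothetical sketch; in the shader it is just a per-channel multiply of the two texture samples):

```java
// Sketch of a per-pixel multiply blend of a scratch layer over a base image.
public class MultiplyBlend {
    // Multiply blend of two 8-bit channels: out = a * b / 255.
    static int blendChannel(int a, int b) {
        return a * b / 255;
    }

    static int blendArgb(int base, int scratch) {
        int r = blendChannel((base >> 16) & 0xFF, (scratch >> 16) & 0xFF);
        int g = blendChannel((base >> 8) & 0xFF, (scratch >> 8) & 0xFF);
        int b = blendChannel(base & 0xFF, scratch & 0xFF);
        return (base & 0xFF000000) | (r << 16) | (g << 8) | b;
    }
}
```

White areas of the scratch texture (255) leave the image untouched, while dark scratch pixels darken it, which is what produces the scratch marks.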

Tuesday, November 8, 2011

Sepia Tone Filter & Lens Blur

I just implemented a Sepia Tone Filter and Lens Blur for images. Below is the original image:
For the Sepia Tone Filter, I used a linear equation to convert the colors of the original image, sampled as a texture map. The filtered image is shown below:
For the Lens Blur effect, I used a simple circle to separate the central area from the boundary. For the boundary I applied a box blur and scaled the size of the box linearly with the pixel's distance from the center. The box size determines how many surrounding pixels are averaged into the target pixel.
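The idea can be sketched in plain Java on a grayscale image (hypothetical helper names; the box radius is 0 inside the focus circle and grows linearly with distance beyond it):

```java
// Sketch of a lens blur: box-blur a grayscale image with a radius that
// grows linearly with each pixel's distance from the image center.
public class LensBlur {
    static int[][] blur(int[][] img, double focusRadius, double radiusPerPixel) {
        int h = img.length, w = img[0].length;
        double cx = (w - 1) / 2.0, cy = (h - 1) / 2.0;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double d = Math.hypot(x - cx, y - cy);
                // Radius 0 inside the focus circle, growing linearly outside it.
                int r = d <= focusRadius ? 0 : (int) ((d - focusRadius) * radiusPerPixel);
                int sum = 0, count = 0;
                for (int dy = -r; dy <= r; dy++) {
                    for (int dx = -r; dx <= r; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += img[yy][xx];
                            count++;
                        }
                    }
                }
                out[y][x] = sum / count; // box average over the clipped window
            }
        }
        return out;
    }
}
```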
These two effects combine easily. Since Lens Blur requires communication between pixels while the Sepia Tone Filter operates on each pixel independently, I first compute the blurred image and then apply the Sepia Tone Filter to the blurred color values. Below is the final result.
I will implement scratches and maybe also grain later this week. I will also do more research into the Android SDK and camera API. Cheers!

Friday, October 7, 2011

Pitch -- Found Footage Recorder on Android Phone




Robin Jian Feng

Smartphones bring tons of fun to modern life. One of their key features is the camera. Besides shooting photos anytime and anywhere, there are also many applications that add interesting special effects.

However, most such applications on mobile phones are only for still photos or photo editing. In addition, they mostly process images offline instead of in real time. There is plenty of software on the PC, such as After Effects, that can handle video processing, but what I want to make is an interesting plug-in for Android phones that makes their video recorders more amazing, funny and interesting.

I would like to start with the found-footage effect. This is not only a typical art style in the film industry; it can also bring more fun to self-made videos, like old documentaries. Technically speaking, the color changes and noise in found footage are also a very interesting image-processing topic that can be implemented with GPU algorithms.

I will research different types of found-footage effects (for example, black and white, color, etc.), then build models to simulate the noise, color changes, frame distortion and other features. Meanwhile, I will implement the algorithms as parallel computations in GLSL on mobile-phone GPUs.
In the next few weeks I will first run and debug the project on a PC simulator and finally port it to a real Android platform. For the core algorithms, I will also test on still images before applying them to videos. If there is more time, I will try to implement more effects besides found footage to make the plug-in multi-functional.
Below are some videos that use found-footage effects.

http://vimeo.com/16766778
http://www.youtube.com/watch?v=Dwu5jI_2ayA
http://www.youtube.com/watch?v=cd6JYfR3C-0


Here We Go!

Who's This?

Hey, this is Jian Feng (you may also know me as Robin), a second-year master's student in Computer Graphics and Game Technology at UPenn.

This blog is primarily for the final project of my GPU class. I will develop a mobile application that uses parallel computation on the GPU. Thank you for keeping an eye on it!

Cheers!