Kinect v2 – Developers Preview

My Kinect v2 Developers Preview has finally arrived. I am looking forward to playing around with it, and I hope that I will be able to incorporate it into my project on Object Tracking in Real-time 3D Reconstructions.

Object Tracking in Real-time 3D Reconstructions

As of today I am beginning a new project, “Object Tracking in Real-time 3D Reconstructions”. My goal is to write blog posts about the project as it develops. The project expands on previous work done for my bachelor thesis [2].

Figure 1: The original Kinect video from Microsoft Research

Figure 2: Demonstration of our finger paint application

What’s Next

The KinectFusion papers [1] describe a scanning algorithm that integrates scanned data over time into a TSDF volume in order to improve quality. It does so both in a foreground volume and in a background volume. In our thesis this was not the case for the foreground volume; instead, a direct insert was used for each frame. In order to improve quality in the foreground volume, a separate ICP alignment must be carried out. An example of the improved quality for the foreground volume can be seen in Figure 3.
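
To make the idea concrete, below is a minimal CPU-side sketch of the per-voxel TSDF update described in [1]. The struct layout, variable names and the fixed per-observation weight are my own assumptions for illustration, not code from the thesis.

#include <algorithm>

// One cell of a truncated signed distance function (TSDF) volume.
struct Voxel {
    float tsdf   = 1.0f;  // truncated signed distance to the nearest surface
    float weight = 0.0f;  // confidence accumulated over frames
};

// Integrate one new depth observation into a voxel as a weighted running
// average, following the update rule from the KinectFusion paper [1].
//   sdf        : signed distance from the voxel centre to the observed surface
//   truncation : distances are clamped to [-truncation, truncation]
//   maxWeight  : caps the weight so the volume can still adapt to changes
void integrate(Voxel& v, float sdf, float truncation, float maxWeight = 128.0f)
{
    if (sdf < -truncation)        // voxel is far behind the observed surface: skip
        return;
    const float tsdf = std::min(1.0f, sdf / truncation);
    const float obsWeight = 1.0f; // per-observation weight; could depend on the sensor
    v.tsdf   = (v.tsdf * v.weight + tsdf * obsWeight) / (v.weight + obsWeight);
    v.weight = std::min(v.weight + obsWeight, maxWeight);
}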

Foreground ICP Alignment

ICP alignment in the foreground volume can have some interesting additional applications. Not only can the quality be improved, but it is also possible to track the orientation and position of a foreground object relative to the camera (Figure 4).
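
A rough sketch of how this could look using the Point Cloud Library [3] is shown below. The function, point type and parameter values are assumptions for illustration, not the project's actual code.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Align the foreground points of the current frame against the accumulated
// foreground model. The resulting rigid transform gives the foreground
// object's rotation and translation relative to the model (and hence the camera).
Eigen::Matrix4f alignForeground(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& currentForeground,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& foregroundModel)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(currentForeground);
    icp.setInputTarget(foregroundModel);
    icp.setMaxCorrespondenceDistance(0.05); // metres; tune to sensor noise
    icp.setMaximumIterations(30);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);
    // icp.hasConverged() / icp.getFitnessScore() can be used to reject bad alignments.
    return icp.getFinalTransformation();    // 4x4 rigid transform
}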

Project Goal

An implementation of object position and rotation tracking is the main goal of this project. However, quality improvement of the foreground volume should be implemented first.

Figure 3: Improvement in scanning quality when using ICP on the foreground volume. Notice how the foreground scan in the bottom row is of higher quality. Image from [1].

Figure 4: In the original Kinect video it is demonstrated how a real-world object is aligned with a scan of itself. By doing so, the orientation and position of the object become known.

References

[1]: Kinect Fusion: https://research.microsoft.com/en-us/projects/surfacerecon/

[2]: Human Interaction on Real-time Reconstructed Surfaces, Jeppe U. Walther & Søren V. Poulsen

[3]: Point Cloud Library: http://pointclouds.org/

You’re here because of a robot

Last semester I was part of a rather interesting experiment through the DTU course “02805 Social Graphs and Interactions”.

The point of the experiment was, as described below by Sune Lehmann:

Is it possible for a small computer science course to exert measurable influence (trending topics) on Twitter, a massive social network with hundreds of millions of users? The surprising answer to that question is “yes”. That’s exactly what we did this year, using simple Python scripts and the Twitter API. Below we explain why & how + some of our findings along the way.

Read the rest of Sune's blog post, “You’re here because of a robot”, here.

Our Twitter bot @happytoaster, later renamed to @jinxymulan, was the winner of the course competition for gaining the most followers.

Jinxy Mulan in Boston

The most interesting part was the final phase of the project, where the bots moved to Boston. Here our Twitter bot gathered data from the social network around five Boston-based comic shops and their followers.

One of several results extracted from the dataset was a classifier for commercial and non-commercial Twitter accounts in the network. The classifier achieved an accuracy of around 80%.


Figure 1: Graph of the social network connections categorised by our classifier. Red nodes are commercial profiles, cyan nodes are personal profiles, and purple nodes are unclassified. The nodes are sized according to their number of followers.


Contact me if you are interested in the full report.

Started a Company

So I have finally started my own company; next step, world domination.

Human Interaction on Real-time Reconstructed Surfaces

So this summer Søren V. Poulsen and I finished our bachelor thesis on “Human Interaction on Real-time Reconstructed Surfaces”. The thesis is inspired by the work done by Microsoft Research on KinectFusion and expands upon the open source implementation PCL KinFu, allowing us to use the Kinect to perform foreground-background segmentation in dynamic scenes with a handheld camera. This lets us detect intersections between foreground and background objects, which resulted in a finger-painting application that enables users to paint on reconstructed surfaces by directly interacting with their real-life counterparts. As our method is well suited for GPU implementation, real-time performance is achieved. We find that our solution supports multi-touch and is surprisingly accurate.
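
Conceptually, the intersection test behind the painting can be sketched as follows, assuming the background is stored as a TSDF volume that can be sampled at a world-space position. The function names and the contact threshold are hypothetical, not the actual thesis code (which runs on the GPU).

#include <cmath>
#include <functional>
#include <vector>

struct Point3 { float x, y, z; };

// sampleTsdf is a caller-supplied function returning the background TSDF value
// at a world-space position (negative behind the surface, roughly zero on it).
// A foreground point (e.g. a fingertip) lying within contactBand of the zero
// level set is treated as touching the surface, and its position is recorded
// so the renderer can paint the corresponding spot.
std::vector<Point3> findContacts(
    const std::vector<Point3>& foregroundPoints,
    const std::function<float(const Point3&)>& sampleTsdf,
    float contactBand = 0.005f /* metres */)
{
    std::vector<Point3> contacts;
    for (const Point3& p : foregroundPoints)
        if (std::fabs(sampleTsdf(p)) < contactBand)
            contacts.push_back(p);
    return contacts;
}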

 

User testing the drawing application

We are currently in the process of cleaning up the implementation for an open source release. I will post a new blog post when we have finished the process.

Regarding the project report, feel free to contact me and I will send you a PDF version.

ACM SIGGRAPH LaTeX and BibTeX Styles (Debian package)


When writing LaTeX documents I like to use the two-column ACM SIGGRAPH style. It is usually a pain to install, as I keep forgetting how to do it, so to help myself and others I have created a Debian package. The package is only tested on my Ubuntu 11.10 install, so let me know if you have any problems with it.

Real-time Rendering of Voxel-based Scenes

   

In the spring semester of 2011 I did a project on real-time rendering of voxel-based scenes together with Søren V. Poulsen and Frederik P. Aalund. As part of the project we created an OpenCL/C++ implementation. An overview of this implementation can be seen in the video below.

Abstract

Volume visualization is a field with many applications, especially in the area of visualization of medical data. This project will examine rendering of volume data (represented as 3D grids of voxels) using GPU based ray casting in an interactive program. The goal is to be able to represent an entire scene (a terrain) as a 3D volume. The user will be given tools to alter and model the scene by changing materials and shapes. The motivation here is that such a volumetric scene is completely malleable – unlike terrains in normal game engines.
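
As a rough illustration of the ray-casting idea, here is a much-simplified CPU sketch that marches a ray through a dense voxel grid with a fixed step. The actual project implementation was written in OpenCL and uses a proper grid traversal, so this is only meant to convey the principle.

#include <cstdint>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Dense voxel grid; a non-zero value means the voxel is filled with a material.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<uint8_t> data; // size nx * ny * nz
    uint8_t at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return 0;
        return data[(z * ny + y) * nx + x];
    }
};

// March a ray from origin along dir and return the material of the first
// filled voxel hit, or 0 if the ray leaves the volume. A production ray
// caster would use a DDA traversal and empty-space skipping instead.
uint8_t castRay(const VoxelGrid& grid, Vec3 origin, Vec3 dir,
                float maxDist, float step = 0.5f)
{
    for (float t = 0.0f; t < maxDist; t += step) {
        const int x = static_cast<int>(std::floor(origin.x + dir.x * t));
        const int y = static_cast<int>(std::floor(origin.y + dir.y * t));
        const int z = static_cast<int>(std::floor(origin.z + dir.z * t));
        if (uint8_t m = grid.at(x, y, z))
            return m;
    }
    return 0;
}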

 

Kinect Mouse Driver

I really should remember to update my website. About a year ago I did a test of a “mouse driver” using Kinect input.

Solarized color scheme for Qt Creator and KDE

Some days ago I stumbled on the Solarized color scheme by Ethan Schoonover. I decided I really liked it and created a color scheme for Qt Creator. But why stop at the IDE? These colors also work quite nicely for Konsole and KDE window backgrounds.

Qt Creator

Get this file, Solarized – dark, and put it in “$HOME/.config/Nokia/qtcreator/styles”:

wget -P $HOME/.config/Nokia/qtcreator/styles http://jeppewalther.com/blogContent/solarized-dark.xml

KDE

Konsole

Get this file, Solarized – dark – konsole, and put it in “$HOME/.kde/share/apps/konsole”:

wget -P $HOME/.kde/share/apps/konsole http://jeppewalther.com/blogContent/konsole/solarized-dark.colorscheme

Application colors

Get this file, Solarized – dark – kde, and put it in “$HOME/.kde/share/apps/color-schemes”:

wget -P $HOME/.kde/share/apps/color-schemes http://jeppewalther.com/blogContent/kde/solarized-dark.colors

Now remember to select the color schemes in Qt Creator, Konsole and the KDE Application Appearance settings.

ICP

Digital puppeteering with openkinect

This is a proof of concept I did; there is much to be improved in the tracking algorithm, but maybe this could be a useful tool for animators.

Kinect hacking with openkinect

Finally had some time for a little Kinect hacking.

Ubuntu 10.10 Suspend fail on Gigabyte X58A-UD3R mainboard

On Ubuntu 10.04 I used this solution, but apparently the xhci module is now called xhci_hcd, so I needed to change “/etc/pm/sleep.d/05_disable_usb3” to:

#!/bin/bash

## Unload USB 3 module before sleep
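## The hook only runs if it is executable: chmod +x /etc/pm/sleep.d/05_disable_usb3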

case "$1" in
    suspend)
        /sbin/modprobe -r xhci_hcd
        ;;
    hibernate)
        /sbin/modprobe -r xhci_hcd
        ;;
    thaw)
        /sbin/modprobe xhci_hcd
        ;;
    resume)
        /sbin/modprobe xhci_hcd
        ;;
esac

Knowledge Sharing

Finally, the report on Knowledge Sharing in Danish Animation is done.

You can download it from here

Sensor-network Simulation

During a course at DTU this January, we (Søren Vinter Poulsen, Frederik Peter Aalund and I) created this Java program. The goal was to create a simulation of a sensor network. The solution was designed and implemented over a 14-day period.

Download the jar file here.

You should run it with something like these parameters:

java -Xms1024m -Xmx2048m -Xss512m -jar <the downloaded jar file>

The .java source files are included in the jar file.

Opensource VFX

A listing of the best open source projects in VFX and Animation.

http://opensourcevfx.org/

As open source in this industry is one of my key concerns, I am proud to be part of this initiative.

List of studio projects (opening up the pipeline):

Open EXR
http://www.openexr.com/

Sony Pictures Imageworks (several projects)
http://opensource.imageworks.com/

Image Engine (Cortex)
http://opensource.image-engine.com/
http://code.google.com/p/cortex-vfx/

Rising Sun Pictures
http://open.rsp.com.au/

Twitter

My WordPress posts are now automatically posted to my Twitter account http://twitter.com/jeppewalther

Knowledge Sharing – Animation Hub

I am currently working (in my spare time) on a project with Morten Steinbach and Animation Hub. The project is an investigation of the animation industry’s attitude towards knowledge sharing.

Visuals, sensor networks

Working on some visual tests for a project on sensor networks.

Using Java.
