
CUBEFLOW


Application status: 

Published

Genre: 

Programming language: 

C#

Development IDE or framework: 

Unity3D (with MonoDevelop)

Looking for beta testers: 

No

Project description: 

CUBEFLOW is an application for creating and exploring complex, motion-based volumetric structures interactively in real-time 3D. It is primarily intended as a creative tool for visual arts and procedural content creation. As an environment for generative art, CUBEFLOW supports applications ranging from interactive visual performance to the creation of complex 3D models.

We used Unity3D (PRO license) with MonoDevelop to develop all necessary functions and implemented the interaction model in C#. The surface reconstruction algorithm is based on the well-known marching cubes algorithm, optimized and modified for our specific demands (a minimal sketch of the core classification step appears below). We integrated the Intel Perceptual Computing SDK as a Unity3D plugin, following the Unity3D framework sample provided with the SDK. For content creation (2D graphics, textures) we used GIMP; for 3D modeling we used Blender.
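To illustrate the volume-to-mesh step, here is a minimal, self-contained sketch of the classification pass of a marching-cubes-style reconstruction. The grid resolution, iso-level, and test sphere are assumptions for illustration only, not CUBEFLOW's actual implementation; emitting real triangles would additionally require the standard 256-entry edge/triangle lookup tables, omitted here for brevity.

```csharp
using System;

// Sketch: the per-cell classification step of marching cubes over a
// scalar voxel grid. A cell whose corner densities straddle the
// iso-level intersects the surface.
class MarchingCubesSketch
{
    const int N = 32;            // voxel grid resolution (assumption)
    const float IsoLevel = 0.5f; // surface threshold (assumption)

    static void Main()
    {
        var density = new float[N, N, N];

        // Fill the volume with a soft sphere as a stand-in for the
        // finger-drawn shapes CUBEFLOW accumulates over time.
        float c = (N - 1) / 2f, r = N / 4f;
        for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
        for (int z = 0; z < N; z++)
        {
            float dx = x - c, dy = y - c, dz = z - c;
            float d = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
            density[x, y, z] = Math.Max(0f, Math.Min(1f, 1f - d / r));
        }

        // Classify each cell: build an 8-bit index from which of its
        // eight corners lie above the iso-level. Indices 0 and 255 mean
        // the cell is entirely outside or inside the surface and
        // produces no geometry.
        int surfaceCells = 0;
        for (int x = 0; x < N - 1; x++)
        for (int y = 0; y < N - 1; y++)
        for (int z = 0; z < N - 1; z++)
        {
            int cubeIndex = 0;
            for (int i = 0; i < 8; i++)
            {
                int cx = x + (i & 1);
                int cy = y + ((i >> 1) & 1);
                int cz = z + ((i >> 2) & 1);
                if (density[cx, cy, cz] > IsoLevel) cubeIndex |= 1 << i;
            }
            if (cubeIndex != 0 && cubeIndex != 255)
                surfaceCells++; // here the lookup tables would emit triangles
        }

        Console.WriteLine(surfaceCells + " cells intersect the isosurface.");
    }
}
```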

Intel tools/SDKs used: 

Intel Perceptual Computing SDK

Unique features: 

The app CubeFlow provides the user with a fluid, experimental modeling experience, controlled entirely with finger and hand gestures. The Intel Perceptual Computing SDK provides the input pipeline (PXCMPipeline) for gesture recognition (PXCMPipeline.QueryGesture) and supplies finger positions (PXCMGesture.GeoNodes) in world space (PXCMGesture.GeoNode.positionWorld).

The interaction model of CubeFlow is built around two modes, "recording mode" and "navigation mode", which can be toggled from the UI or by thumb-up/thumb-down gestures.

While recording is running, the app is in recording mode. Incoming finger positions of the left and right hand are used to generate volumetric shapes: each recognized finger of each hand draws a circular shape into the volume. The user can vary the number of fingers used for modeling and can rotate and move them around to shape the model. As long as recording continues, the inputs are processed into a volumetric 3D shape over time. The closer a finger is to the camera, the larger its input shape is scaled.

While recording is paused, the app is in navigation mode, which helps the user examine the created model. The user can rotate the model around two axes by moving one or both hands left, right, up, and down. Hand movement controls the rotation when three or more fingers are recognized ("open hand"); when two or fewer fingers are recognized ("closed hand"), hand movement is ignored, so the user can reposition the hand for the next navigation action. Moving the hands closer to the camera zooms the view out; moving them away zooms it in. A sketch of this two-mode logic follows below.
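The following is a hedged sketch of the two-mode interaction logic described above, not CUBEFLOW's actual code. The SDK input layer is stubbed behind a hypothetical GetFingerWorldPositions() so the sketch stays self-contained (in the real app these positions come from PXCMPipeline.QueryGesture and GeoNode.positionWorld), a key press stands in for the thumb-up/thumb-down mode toggle, and the z coordinate is assumed to be a normalized distance from the camera.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of CUBEFLOW's recording/navigation interaction model.
public class CubeFlowInteraction : MonoBehaviour
{
    enum Mode { Recording, Navigation }

    [SerializeField] Transform model;           // root of the reconstructed mesh
    [SerializeField] float rotationSpeed = 90f; // degrees per unit of hand travel
    [SerializeField] float zoomSpeed = 2f;

    Mode mode = Mode.Recording;
    Vector3? lastHandPos;

    void Update()
    {
        // Stub for the SDK query; the real app reads finger GeoNodes
        // in world space from the Perceptual Computing pipeline.
        List<Vector3> fingers = GetFingerWorldPositions();

        // Thumb up/down gestures (or the UI) toggle the mode; a key
        // stands in for the gesture event here.
        if (Input.GetKeyDown(KeyCode.Space))
            mode = mode == Mode.Recording ? Mode.Navigation : Mode.Recording;

        if (mode == Mode.Recording)
        {
            // Each recognized finger stamps a circular shape into the
            // volume, scaled larger the closer the finger is to the camera
            // (z assumed normalized to 0..1).
            foreach (Vector3 p in fingers)
            {
                float radius = Mathf.Lerp(0.5f, 2f, 1f - Mathf.Clamp01(p.z));
                StampCircle(p, radius);
            }
        }
        else
        {
            // "Open hand" (3+ fingers) drives rotation and zoom; "closed
            // hand" (2 or fewer) lets the user reposition without moving
            // the model.
            if (fingers.Count >= 3)
            {
                Vector3 hand = Centroid(fingers);
                if (lastHandPos.HasValue)
                {
                    Vector3 delta = hand - lastHandPos.Value;
                    model.Rotate(Vector3.up, -delta.x * rotationSpeed, Space.World);
                    model.Rotate(Vector3.right, delta.y * rotationSpeed, Space.World);

                    // Hands toward the camera (delta.z < 0) pull the camera
                    // back (zoom out); away from the camera zooms in.
                    var cam = Camera.main;
                    if (cam != null)
                        cam.transform.Translate(0f, 0f, delta.z * zoomSpeed);
                }
                lastHandPos = hand;
            }
            else
            {
                lastHandPos = null;
            }
        }
    }

    // Placeholders for the parts supplied by the SDK and the volume code.
    List<Vector3> GetFingerWorldPositions() { return new List<Vector3>(); }
    void StampCircle(Vector3 worldPos, float radius) { /* write into voxel volume */ }

    static Vector3 Centroid(List<Vector3> pts)
    {
        Vector3 sum = Vector3.zero;
        foreach (Vector3 p in pts) sum += p;
        return pts.Count > 0 ? sum / pts.Count : sum;
    }
}
```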

Application shortcomings/areas for improvement: 

NA

Project type: 

Intel® RealSense™ Technology
