João Fiadeiro
Case study on choreographer João Fiadeiro between October 2014 and March 2016

Introduction: Composition in Real Time method

Performance Studies & Information Visualisation

Multimodal Communication & Cognition

Computer Vision & Motion Tracking

3D Annotator

Annotated videos have been used in the context of dance performance not only as a way to record and share compositions and knowledge between different choreographers, but also as a powerful learning tool. However, restricting the user to the recorded point of view can be an obstacle in several scenarios. Alternatives that introduce the concept of a three-dimensional space have been developed, but they fall short either in the freedom of concepts the user is able to introduce, or in resorting to an unnatural representation. The BlackBox project endeavours to fill this gap and create a new paradigm for the documentation of performance composition by augmenting a 2D video annotator with 3D visualisations of the resulting annotations.

Our initial approach consisted of extending the existing functionality of the Creation-Tool to tackle this problem. The developed system places the 2D annotations onto a three-dimensional point cloud, captured by depth sensors coupled with cameras placed around the performance, thus enabling the user to freely visualize the annotated performance in three dimensions from an arbitrary point of view.
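Placing a 2D annotation into the point cloud requires mapping its pixel anchor to a 3D position. A minimal sketch of how this can be done with the standard pinhole camera model and a depth reading is shown below; the function name and intrinsic values are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def backproject_annotation(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D annotation anchor (u, v) into 3D space.

    Uses the pinhole camera model: given the depth value at the pixel
    and the camera intrinsics (focal lengths fx, fy and principal
    point cx, cy), recover the 3D point in camera coordinates.
    This is a sketch, not the Creation-Tool's implementation.
    """
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A pixel at the principal point maps straight along the optical axis:
p = backproject_annotation(320, 240, 2.0, 525.0, 525.0, 320.0, 240.0)
# p is [0.0, 0.0, 2.0]
```

With each annotation anchored to a 3D point this way, it stays attached to the performer or object it refers to as the user orbits the reconstructed scene.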

The developed system was later tested in a real environment: two separate sessions of RTC involving João Fiadeiro and seven dancers from his company were recorded using both video cameras and Microsoft Kinect sensors. The collected data were used to generate the corresponding point clouds, which were analysed together with the videos. The main criterion for choosing the sketches to annotate was the richness of the information provided by the improvisation and its relevance to the choreographer's work. The corresponding video was annotated using the Creation-Tool. The main annotation types used were text and hyperlink annotations, enhancing some concepts underlying the RTC method (e.g. Position and Relationship).

 

Video examples

 

3D Annotator Ex1

This video shows a working example of the developed system. We have designed an interface composed of two main panels, located at the top and the bottom of the screen. These locations keep the centre of the screen free, which is where the performance takes place. The top panel is the Annotation Navigator panel, composed of a main control that affects the remaining three widgets. The main control is the annotation navigator (a), which allows the user to advance to the next annotation in the scene or return to a previous one. The bottom panel is the Timeline Control panel, which allows the user to fast-forward or rewind to a desired frame. In this example it is also possible to see how text annotations are coupled to the point cloud data.

3D Annotator V1

This video shows the several phases that comprise the data pre-processing stage. Initially, the different viewpoints are calibrated in order to obtain a cohesive point cloud. Next, the point cloud is segmented, which consists of identifying and separating dancers and objects into individual point clouds. Finally, normal vector data are calculated for every detected point cloud.
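The final step above, normal estimation, is commonly done by fitting a local plane to each point's neighbourhood. The sketch below illustrates the standard PCA-based approach (the eigenvector of the neighbourhood covariance with the smallest eigenvalue is the surface normal); it is an assumed, generic implementation, not the project's own code.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a surface normal for each point in an (N, 3) cloud.

    For each point, take its k nearest neighbours, compute their
    covariance matrix, and use the eigenvector associated with the
    smallest eigenvalue as the normal of the locally fitted plane.
    Brute-force neighbour search for clarity; a k-d tree would be
    used in practice. Normal orientation (sign) is ambiguous.
    """
    normals = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]  # smallest eigenvalue -> plane normal
    return normals
```

For points sampled from a flat floor (z = 0), the estimated normals point along the z-axis, up to sign, which is the expected behaviour for this technique.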