João Fiadeiro
First Case Study: October 2014 to March 2016

Introduction: Composition in Real Time method

Performance Studies & Information Visualisation

Multimodal Communication & Cognition

Computer Vision & Motion Tracking

Rui Lopes Graça
Second Case Study: May 2016 to October 2017

Dynamic Annotation Visualizer

Annotating body movements in gesture research is a recurring task for researchers, one that has already motivated the development of several specialized tools, such as ELAN (Brugman & Russel 2004) and ANVIL (Kipp 2012). However, visualizing the different annotated streams together, in order to draw conclusions from a study, remains an open technological problem.

To fill this gap, we have developed a visualization tool in Unity3D (provisionally called "DynaVis"), which allows gesture researchers to export their annotated data in comma-separated values format (.csv) from video annotation applications such as those mentioned above, and to visualize all the different recording streams in a more user-friendly way. Participants are represented as simplified avatars, and each annotation also has a symbolic representation in the virtual simulation. The user can enable and disable different types of annotations to eliminate visual noise and focus only on certain body parts, participants, or actions.
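The pipeline described above, loading exported annotations and querying which ones are active at a given playback time, can be sketched as follows. This is an illustrative Python sketch, not DynaVis itself (which runs in Unity3D); the column names `tier`, `begin_ms`, `end_ms`, and `value` are assumptions, and real ELAN or ANVIL exports may use different headers.

```python
import csv
from collections import defaultdict

def load_annotations(path):
    """Group annotations by tier from a .csv export.

    Assumes columns named 'tier', 'begin_ms', 'end_ms', and 'value';
    adjust the field names to match your annotation tool's export.
    """
    streams = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            streams[row["tier"]].append(
                (int(row["begin_ms"]), int(row["end_ms"]), row["value"])
            )
    return dict(streams)

def active_at(streams, t_ms):
    """Return the annotations active at time t_ms, one list per tier.

    This is the core query a visualizer needs on every frame: which
    annotation symbols should currently be displayed for each stream.
    Disabling a tier in the UI amounts to skipping its entry here.
    """
    return {
        tier: [v for (b, e, v) in anns if b <= t_ms < e]
        for tier, anns in streams.items()
    }
```

Keeping each tier as a separate stream makes it straightforward to toggle annotation types on and off independently, as described above.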

Moreover, gaze annotations are visualized much more clearly than in simple 2D video, where gaze can be tracked for only one participant at a time. In contrast, our tool renders multiparty gaze annotations as lines, allowing the user to follow each of them clearly.
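Turning multiparty gaze annotations into renderable lines can be sketched as below. This is a hypothetical illustration: the participant names and positions are invented, and in the actual tool the endpoints would come from the avatars' head positions in the Unity scene.

```python
def gaze_segments(positions, gaze_events, t_ms):
    """Return (start, end) point pairs for every gaze link active at t_ms.

    positions   -- {participant: (x, y, z)} head positions
    gaze_events -- [(begin_ms, end_ms, gazer, target), ...]

    Each returned pair is one line segment to draw from the gazer's
    head to the target's head, so several participants' gaze can be
    shown simultaneously.
    """
    return [
        (positions[gazer], positions[target])
        for (b, e, gazer, target) in gaze_events
        if b <= t_ms < e and gazer in positions and target in positions
    ]
```

Because each active gaze event yields its own segment, any number of participants can be tracked at once, which is the advantage over a single 2D video.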

We have also created a 3D version of the visualizer, in which one can navigate freely through the environment to take the perspective of one of the participants, something that is likewise not possible with an ordinary annotated video.
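Taking a participant's perspective amounts to placing the virtual camera at that participant's head and orienting it toward whatever they are looking at. A minimal sketch of the orientation step, with invented coordinates (in Unity this would be handled by the engine's own camera transform):

```python
import math

def view_direction(eye, target):
    """Unit vector a first-person camera at `eye` should face
    in order to look at `target`."""
    d = [t - e for e, t in zip(eye, target)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)
```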


Video examples

Dynamic Annotation Visualizer

This video shows an example of the tool running on a "Floor-Study" with six participants playing the "CTR game" (Composition in Real Time) with João Fiadeiro.