Dynamic Annotation Visualizer

Annotating body movements in gesture research is a recurring task for researchers, and it has already motivated the development of several specialized tools, such as ELAN (Brugman & Russel 2004) and ANVIL (Kipp 2012). However, visualising the different annotated streams in order to draw conclusions from a study remains a technological challenge.

To fill this gap, we have developed a visualisation tool in Unity3D (provisionally called "DynaVis") that allows gesture researchers to export their annotated data in comma-separated values (.csv) format from video annotation applications such as those mentioned above, and to visualise all of the recorded streams in a more user-friendly way. Participants are represented as simplified avatars, and each annotation also has a symbolic representation in the virtual simulation. The user can enable and disable different types of annotations to eliminate visual noise and focus only on certain body parts, participants, or actions.
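
As a rough illustration of the import step this implies, the sketch below reads a flattened annotation export, groups it into per-participant, per-tier streams, and filters the annotations active at a given playback time (mirroring the enable/disable toggles described above). The column names ("Tier", "Participant", "Begin Time (ms)", "End Time (ms)", "Annotation") are assumptions for illustration only; ELAN and ANVIL exports are configurable, and the actual DynaVis importer may differ.

```python
# Minimal sketch (not the actual DynaVis importer): read an annotation export
# and group it into per-participant, per-tier streams. Column names are
# assumptions; adjust them to match your own .csv export.
import csv
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Annotation:
    participant: str
    tier: str          # e.g. "Gesture", "Gaze", "Posture"
    begin_ms: int
    end_ms: int
    value: str         # the annotation label itself

def load_annotations(path):
    """Return streams[participant][tier] -> list of Annotation."""
    streams = defaultdict(lambda: defaultdict(list))
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            ann = Annotation(
                participant=row["Participant"],
                tier=row["Tier"],
                begin_ms=int(row["Begin Time (ms)"]),
                end_ms=int(row["End Time (ms)"]),
                value=row["Annotation"].strip(),
            )
            streams[ann.participant][ann.tier].append(ann)
    return streams

def active_at(streams, t_ms, enabled_tiers=None):
    """Annotations overlapping time t_ms, optionally restricted to some tiers
    (this mirrors toggling annotation types on and off in the visualiser)."""
    hits = []
    for tiers in streams.values():
        for tier, anns in tiers.items():
            if enabled_tiers and tier not in enabled_tiers:
                continue
            hits.extend(a for a in anns if a.begin_ms <= t_ms < a.end_ms)
    return hits
```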

Moreover, gaze annotations are visualised much more clearly than in plain 2D videos, where gaze can only be tracked for one participant at a time. In contrast, our tool displays multiparty gaze annotations as lines, allowing the user to clearly follow each participant's gaze.
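
To make the idea concrete, the following sketch (building on the streams structure above) derives the (looker, target) pairs that could be drawn as gaze lines between avatars at a given playback time. It assumes a tier literally named "Gaze" whose annotation value names the participant being looked at; both the tier name and this coding convention are illustrative assumptions, not part of the tool.

```python
# Sketch continuing from load_annotations() above: derive (looker, target)
# gaze pairs at a given playback time, which a visualiser could render as
# lines between the corresponding avatars.
def gaze_pairs(streams, t_ms, gaze_tier="Gaze"):
    pairs = []
    for looker, tiers in streams.items():
        for ann in tiers.get(gaze_tier, []):
            # Keep the pair only if the annotation is active at t_ms and its
            # value names a known participant.
            if ann.begin_ms <= t_ms < ann.end_ms and ann.value in streams:
                pairs.append((looker, ann.value))  # e.g. ("P1", "P4")
    return pairs

# Example: every participant's gaze target 90 seconds into the recording.
# streams = load_annotations("floor_study_export.csv")  # hypothetical file
# print(gaze_pairs(streams, t_ms=90_000))
```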

We have also created a 3D version of the visualiser, in which one can navigate freely through the environment and take the perspective of any of the participants, which is likewise not possible with an ordinary annotated video.


Video examples

Dynamic Annotation Visualizer

This video shows the tool in action on a "Floor-Study" with six participants playing the "CTR game" with João Fiadeiro.