This event is endorsed and organized by the 5th International Conference on Intelligent Technologies for Interactive Entertainment.

July 3–5, 2013 | Mons, Belgium

CUTE Workshop

 

Initiated by the NUMEDIART institute, the CUTE workshop aims to bring together culture and cutting-edge technology. A full-day, hands-on master class (2 July 2013) is offered at no additional cost.
Location: 31, boulevard Dolez

The four tracks of the master class are listed below:

  • Sense & Perform
  • Behaviour tracking and advanced MOCAP
  • Hands-on Performative Speech Synthesis with MAGE
  • Smart Room

 

Behaviour tracking and advanced MOCAP

In this class you will discover how to track and analyse people's motion and behaviour, from the body to the eyes, including facial mocap.

Body motion capture:

Body motion capture is mainly used in the field of character animation. You will learn about existing technologies for recording and analysing human body movement, from the people who helped set the singers of Cold Love on fire.

  • Do you want to know about the hardware and calibration procedures, 3D character models, animation data formats and signal processing for animation? (A short example of reading one common data format follows below.)
  • Do you want to animate a virtual character?
  • Do you want to test our inertial motion capture system? The singers of Cold Love already did it: https://vimeo.com/34948090

You will be able to use our IGS-190 Animazoo mocap suit.
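As an illustration of the kind of animation data format covered in this class, here is a minimal sketch that lists the joints declared in a BVH file, one common export format for recorded body motion capture. The file name take01.bvh is a hypothetical placeholder; this is not material from the workshop itself.

```python
# Minimal sketch: list the joints declared in a BVH file's HIERARCHY section.
# "take01.bvh" is a hypothetical file name used only for illustration.
def read_bvh_joints(path):
    joints = []
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] in ("ROOT", "JOINT"):
                joints.append(tokens[1])   # the joint name follows the keyword
            elif tokens[0] == "MOTION":
                break                      # per-frame channel data starts here
    return joints


if __name__ == "__main__":
    for name in read_bvh_joints("take01.bvh"):
        print(name)
```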

Face motion capture:

After body animation, you can go deeper into facial animation through the OptiTrack device.

  • Do you want to know how to set up the hardware for a 7-camera configuration?
  • Do you want to know how to extract the information from face tracking (marker placement, camera calibration, face template)?
  • Do you want to achieve real-time performance by streaming data to MotionBuilder?

Eye tracking:

Eye movements are a window onto the brain and our cognitive processes. Based on the experience of the Attention Group of the NumediArt Institute, you will learn not only how to measure eye movements but also how to predict them.

  • Do you want to know how to calibrate an eye-tracking device?
  • Do you want to know how to run valid eye-tracking tests, extract fixations and analyse eye movements? (A short fixation-extraction sketch follows below.)
  • Do you want to know how to predict human gaze fixations and use attention prediction algorithms?

You will discover eye-tracking through our FaceLab 5 device, and will be able to use our code to predict human attention.
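To give a feel for what "extracting fixations" involves, here is a minimal sketch of dispersion-based fixation detection (the classic I-DT approach). It is not the Attention Group's code, and the thresholds are arbitrary placeholders that would need tuning to the device and setup.

```python
# Minimal I-DT sketch: group raw gaze samples (t, x, y) into fixations whenever
# the points stay within a small spatial dispersion for long enough.
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """samples: list of (t_seconds, x_px, y_px). Returns (t_start, t_end, cx, cy)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window that spans at least min_duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        xs = [p[1] for p in samples[i:j + 1]]
        ys = [p[2] for p in samples[i:j + 1]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Extend the window while the dispersion stays below the threshold.
            while j + 1 < n:
                nx, ny = samples[j + 1][1], samples[j + 1][2]
                if (max(max(xs), nx) - min(min(xs), nx)
                        + max(max(ys), ny) - min(min(ys), ny)) > max_dispersion:
                    break
                xs.append(nx)
                ys.append(ny)
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1  # not a fixation yet: slide the window forward by one sample
    return fixations
```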

The instructors of this track are:

  • Dr. Matei Mancas, Numediart project leader
  • Dr. Joelle Tilmanne, Numediart project leader
  • Thierry Ravet, Numediart project leader
  • Nicolas Riche
  • Huseyin Cakmak


Hands-on Performative Speech Synthesis with MAGE

MAGE is the first platform for reactive programming of speech synthesis (https://www.numediart.org/projects/15-3-mage-phts-covop/), and version 2.0 has recently been released. You will be able to modify speech expressivity in real time and get hands-on experience with the MAGE platform.

  • Do you want to share ideas for reactive performances?
  • Do you want to learn more about the MAGE library?
  • Do you want to discuss the design and spend time with the instructors, perhaps sketching a first software prototype?

Speech is the richest and most ubiquitous modality of communication used by human beings. Through vocal expression and conversation, we carry out a complex process that is highly interactive and social. Ten years ago, the trend of creating expressive or emotional speech led researchers to realize that such properties are not only a matter of sound quality. Expressivity in speech is contextual, interactive and social; it comes in response to other ongoing processes and reaches across most other human modalities, and therefore other disciplines.

As these new ways of understanding expressivity in speech are being explored, one may notice that a solid platform is still missing. Text-To-Speech (TTS), as a platform, has only tackled a subset of these problems. Most existing TTS systems require a significant amount of text in advance (typically a sentence) and process it into sound as a whole. Most of the time, the ability to influence the synthesis process is limited, completely disabled, or discouraging, as the resulting sound quality quickly degrades. If we consider that expressivity is related to the ability to interact with the artificial speech production process at various production levels and time scales, as happens with real speech, then the requirements for such a platform are different: we need a so-called reactive programming architecture, applied to speech synthesis. To our current knowledge, MAGE is the first attempt towards reactive expressive synthesis.
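To make the contrast with batch TTS concrete, here is a minimal sketch of the reactive idea. All names are hypothetical stand-ins, not the MAGE API; the sketch only illustrates prosody controls being read while sound is still being produced.

```python
# Hypothetical sketch of reactive speech synthesis (not the MAGE API).
# In batch TTS, prosody is fixed once per sentence; here the controls are
# re-read for every small chunk, so they can be changed mid-utterance.
class ReactiveSynthSketch:
    def __init__(self):
        self.pitch_shift = 0.0   # semitones; may be updated at any moment
        self.speed = 1.0         # time-stretch factor; likewise

    def speak(self, phoneme_labels):
        for label in phoneme_labels:
            # The controls are sampled here, per chunk, while audio is being
            # generated, instead of once for the whole sentence.
            yield self.render(label, self.pitch_shift, self.speed)

    def render(self, label, pitch_shift, speed):
        # Placeholder: parameter generation and vocoding would happen here.
        return b""


# Usage idea: a second thread, driven by a fader or a sensor, updates
# synth.pitch_shift / synth.speed while speak() is still yielding audio chunks.
```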


The instructors of this track are:

  • Dr. Nicolas d'Alessandro, Numediart project leader
  • Maria Astrinaki



Smart Room

In this class you will discover how to use the Kinect sensor to build advanced explicit and implicit interaction in a real smart environment: the TV setup.

Kinect-based explicit interaction:

The movie Minority Report was the first to show hand-based natural interaction to a large audience. You will be able to install and use our hand-gesture interaction tool: https://vimeo.com/49277396.

  • Do you want to know how to optimize hand-based gesture interaction?
  • Do you want to run and test our gesture application?
  • Do you want to learn how to easily map hand gestures to a TV interface? (A short mapping sketch follows below.)
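As a small illustration of mapping a tracked hand to a TV interface, here is a sketch that converts a hand position (in metres, from any skeleton tracker) into smoothed screen coordinates. The interaction-box bounds, screen size and smoothing factor are assumptions, not values from the workshop's tool.

```python
# Hypothetical sketch: map a tracked hand position to TV-screen coordinates.
SCREEN_W, SCREEN_H = 1920, 1080
BOX_X = (-0.30, 0.30)   # physical interaction box (metres), assumed values
BOX_Y = (-0.20, 0.20)


def to_screen(hand_x, hand_y, prev=None, smoothing=0.7):
    """Map hand (x, y) in metres to pixels, exponentially smoothed against prev."""
    def norm(value, lo, hi):
        return min(1.0, max(0.0, (value - lo) / (hi - lo)))

    px = norm(hand_x, *BOX_X) * SCREEN_W
    py = (1.0 - norm(hand_y, *BOX_Y)) * SCREEN_H   # screen y grows downward
    if prev is not None:                           # smooth out sensor jitter
        px = smoothing * prev[0] + (1.0 - smoothing) * px
        py = smoothing * prev[1] + (1.0 - smoothing) * py
    return px, py
```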

Kinect-based implicit interaction:

Non-verbal communication carries far more information than one would think. This module will cover cues for implicit communication and apply them in our smart TV room: https://tinyurl.com/bxo7gkt.

  • Do you want to know which non-verbal cues can be extracted by using a simple Kinect sensor?
  • Do you want to test our interest extraction module in the TV setup?

All of these modules will be applied to a real, typical living-room TV setup.

The instructors of this track are:

  • Francois Zajega, Digital Artist, Numediart project leader
  • Julien Leroy, Numediart project leader
  • Francois Rocca, Numediart project leader



Sense & Perform

You will get hands-on experience with the use of sensors for interactive performances or virtual musical instruments. Besides controlling musical processes, sensors can also be used to control other elements of an interactive performance, for example video parameters driven by a cellist wearing the sensors, fire projectors, or whatever other media you would like to control.

  • Do you want to see some musical applications of our wireless MARG (Magnetic, Angular Rate and Gravity) sensor system developed at Numediart, controlling virtual musical instruments with gestures in various contexts: concert, studio or dance performance?
  • Do you want to know how to connect the master, nodes and additional sensors to the MARG system?
  • Do you want to know how data can be received, using a WiFi access point or setting up an ad hoc network?
  • Do you want to learn how to decode and transform the received data into messages using an external object inside the Max/MSP/Jitter programming environment or, alternatively, how a stand-alone application can transform the data into OSC messages usable by a whole range of applications? (A short receiving sketch follows below.)
  • Do you want to discover how we can use the bidirectional communication protocol to fine-tune sensor parameters from the host computer (sensitivity of each individual sensor, sampling rate, external analog inputs, …)?
  • Do you want to learn how to achieve mappings that allow precise and expressive control directly from accelerometer and gyroscope data, or how to use the orientation of the sensors expressed as quaternions, obtained through fusion of the MARG data?
  • Do you want to know how to use the external objects for interpolation and gesture recognition that we programmed in Max/MSP?

The central element of the sensor system is a WiFi master module equipped with a combined 3-axis accelerometer, gyroscope and magnetometer. It can be expanded by adding small, light sensor nodes, equipped with the same sensors and connected to the master through a wired digital bus. Several analog inputs on both the master and the nodes allow a variety of other sensors to be attached: https://www.numediart.org/projects/14-3-orchestra.
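For readers who would rather prototype outside Max/MSP, here is a minimal sketch of receiving such OSC messages in Python with the python-osc package and turning a quaternion into Euler angles for mapping. The OSC address and port are assumptions for illustration; the actual message layout depends on how the stand-alone application is configured.

```python
# Minimal sketch: receive quaternion OSC messages and convert them to Euler
# angles for mapping. The address "/marg/1/quat" and port 8000 are assumed.
import math

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def quat_to_euler(w, x, y, z):
    """Unit quaternion -> (roll, pitch, yaw) in degrees, ZYX convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))


def on_quaternion(address, w, x, y, z):
    roll, pitch, yaw = quat_to_euler(w, x, y, z)
    # A real mapping would scale these angles onto synthesis or video controls.
    print(f"{address}: roll={roll:.1f} pitch={pitch:.1f} yaw={yaw:.1f}")


if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.map("/marg/1/quat", on_quaternion)   # assumed address for node 1
    server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
    server.serve_forever()
```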

The instructors of this track are:

  • Todor Todoroff, Digital Artist, Numediart project leader
  • Loic Reboursiere, Digital Artist, Numediart project leader
