The previous post dealt with the generation and acquisition of more realistic human characters suitable for importing into the Unity game engine and controllable by performers via the Kinect plugin. This post features four video demonstrations of the results.
1. Live character projection mapping
The character is a Unity asset; the video demonstrates it attempting to follow the poses and walking movements of the performer, with variable degrees of success.
2. Live MakeHuman character projection mapping
The character is exported from MakeHuman as a Collada (.dae) asset suitable for importing into Unity. The character exhibits a greater degree of realism and may at times be perceived as somewhat uncanny. The behaviour of the character is limited due to its inability to move horizontally with the performer.
3. Live DAZ character projection mapping
The imported semi-realistic human character is a free asset included with the DAZ software; the eyes are incorrectly rendered, but this accidentally produces a somewhat eerie effect. The character can be seen to follow the movements of the performer with reasonable coherence; glitches appear when the performer moves too close to the back wall, at which point the Kinect becomes incapable of tracking the performer correctly.
4. Live two character projection mapping
This video is perhaps the most interesting, in that watching two characters appears to be more engaging than watching one. We tend to read into the video as if the characters are interacting with each other and having a dialogue. One might imagine they are a couple arguing over something, when in fact the two performers were simply testing the interaction of the system, moving back and forth and independently moving their arms without attempting to convey any meaningful interaction or dialogue.
One of the criticisms of the original MikuMorphia was that because it looked like a cartoon, it was far from the uncanny. The Uncanny Valley of computer graphics appears to be located somewhere between human and inhuman realism – where a 3D render makes the viewer feel uncomfortable because it is almost convincing as a mimetic version of a human, but something feels or appears not quite right. To explore this in-between region I therefore had to look towards acquiring or producing more realistic models of humans.
Unity and 3D human-like models
The chosen 3D graphics engine is Unity, which fortunately is able to import a variety of standard 3D models in a number of formats. Rather than attempting to create a model of a human from scratch, I investigated downloading free models from a variety of sources including Turbosquid and TF3DM. Many of these models exhibited a reasonable amount of realism; however, for a model to work with the Kinect it also needs to possess a compatible armature so that the limbs move in response to the actor.
Character illustrating internal armature
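As a rough illustration of why a compatible armature matters, the minimal sketch below shows the general pattern a Kinect-driven avatar script follows in Unity: each tracked joint is mapped onto a named bone Transform in the rig, so a model whose armature lacks those bones (or arranges them differently) simply cannot be driven. The joint names and the GetJointRotation() stub are illustrative placeholders rather than the API of any particular Kinect plugin.

```csharp
using UnityEngine;

// Sketch only: maps tracked Kinect joints onto the bone Transforms of a
// rigged character, which is why the model's armature must expose the
// expected bones in a compatible hierarchy.
public class AvatarBoneMapper : MonoBehaviour
{
    // Bones assigned in the Inspector by dragging them from the rig.
    public Transform leftShoulder;
    public Transform rightShoulder;
    public Transform leftElbow;
    public Transform rightElbow;

    void Update()
    {
        // Drive each bone with the orientation reported for its joint.
        ApplyJoint(leftShoulder, "ShoulderLeft");
        ApplyJoint(rightShoulder, "ShoulderRight");
        ApplyJoint(leftElbow, "ElbowLeft");
        ApplyJoint(rightElbow, "ElbowRight");
    }

    void ApplyJoint(Transform bone, string jointName)
    {
        if (bone == null) return;               // the rig lacks this bone
        bone.rotation = GetJointRotation(jointName);
    }

    Quaternion GetJointRotation(string jointName)
    {
        // Stand-in for the chosen Kinect plugin's joint-orientation query;
        // returns identity here purely so the sketch compiles.
        return Quaternion.identity;
    }
}
```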
Rigging a 3D model with an armature such that the outer skin moves in a realistic manner is a non-trivial task, requiring skill and the use of complex 3D modelling tools such as Maya, 3D Studio Max or Blender. I had high hopes for the automated rigging provided by the online software Mixamo; however, the rigging it generated proved incompatible with the Kinect requirements.
The astounding open source software package MakeHuman enables the generation of an infinite range of human-like figures, with sliders to adjust weight, age, sex and skin colouring, and to enable subtle alteration of limb lengths and facial features.
This package offers a refreshing alternative to the endless fantasy and game-like characters prevalent among human character models on the internet. The generated armature is almost compatible with the Kinect requirements, such that a figure can mimic the movement of the actor, but due to the lack of one vital element (the root node) the actor has to remain in one position: the virtual character is unable to follow the actor if they move left or right. I will be investigating the feasibility of adding this root node via one of the aforementioned 3D modelling tools.
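For reference, the sketch below illustrates what that missing root behaviour would need to do in Unity: translate the character's root transform with the performer's tracked position, in addition to the joint rotations already driving the limbs. The GetUserPosition() stub is a hypothetical stand-in for the hip or torso position reported by the Kinect plugin.

```csharp
using UnityEngine;

// Sketch only: without a driven root node the avatar stays rooted to one
// spot while the performer walks; this moves the root with the performer.
public class FollowPerformerRoot : MonoBehaviour
{
    public float movementScale = 1f;   // metres in the room -> units in the scene
    private Vector3 startPosition;

    void Start()
    {
        startPosition = transform.position;
    }

    void Update()
    {
        Vector3 user = GetUserPosition();
        // Follow only the sideways (x) and depth (z) movement of the performer.
        transform.position = startPosition +
            new Vector3(user.x, 0f, user.z) * movementScale;
    }

    Vector3 GetUserPosition()
    {
        // Placeholder: a real implementation would return the tracked
        // hip-centre position from the sensor.
        return Vector3.zero;
    }
}
```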
DAZ3D Studio, a free character modelling package, does successfully generate characters that possess the correct armature. Although the software is free, the characters, clothing and accessories such as hair are all chargeable. However, rather than attempting to model characters from scratch, this software, with its vast library, provides a painless and efficient method of generating semi-realistic human-like characters.
Critical Note
I was somewhat surprised by the amount of what can only be termed soft porn available in the form of female 3D models in provocative poses, naked or wearing skimpy clothing, suggesting a strange, predominantly male user base using the software to create animations and fantasies that objectify women in an adolescent and very non-PC manner.
A Google image search for DAZ3D gives a disturbing overview of the genre.
Overview
It has been some time since the experimental performance MikuMorphia and the dubious delights of being transformed into a female Japanese anime character. Since then I have cogitated and ruminated on following up the experiment with new work, as well as reading texts by Sigmund Freud and Ernst Jentsch on the nature of the uncanny, with a view to writing a positional statement on how these ideas relate to my investigations in performance and technology.
In January I moved into a bay in the Mixed Reality Lab and began to develop a more user-friendly version of the original experimental performance, whereby it would be possible for other people to easily experience the transformation and its attendant sense of uncanniness without having to don a white skin-tight lycra suit. Additionally, I wanted to move away from the loaded and restrictive designs of the MikuMiku prefab anime characters. I investigated importing other anime characters and ran a few tests that included the projection of backdrops, but these experiments did not break any new ground. Further, the MikuMiku software was closed and did not allow the possibility of getting under the hood to alter the dynamics and interactive capabilities of the software.
MikuMorphia as spectator
Rather than abandoning the MikuMiku experience altogether, I carried out some basic “user testing” with a few willing volunteers in the MR Lab. Instead of having to undress and squeeze into a tight lycra body suit, participants don a white boiler suit over their normal clothes. This does not produce an ideal body surface for projection, being a rather baggy outfit with creases and folds, but it enables people to easily try out the experience.
Observing participants trying out the MikuMiku transformation as a spectator rather than a performer made clear to me that watching the illusion and the behaviour of a participant is a very different experience from being immersed in it as a performer.
The subjective experience of seeing oneself as other is completely different from objectively watching a participant – the sense of the uncanny as a spectator appears to be lost.
Rachel Jacobs, an artist and performer, likened the experience to having the performer's internal vision of their performance character made visually explicit, rather than internalised and visualised “in the mind's eye”. The concept of the performer's character visualisation being made explicit through the visual feedback of the projected image is one that deserves further investigation with other performers experienced in character visualisation.
Video of Rachel experiencing the MikuMiku effect:
Unity 3D
My first choice of an alternative to MikuMiku is the game engine Unity 3D, which enables bespoke coding, has plugins for the Kinect, and offers an asset store from which characters, demos and scripts can be downloaded and modded. In addition, the Unity community, with its forums and experts, provides a platform for problem solving and includes examples of a wide range of experimental work using the Kinect.
Over the last few days, with support from fellow MRL PhD student Dimitrios, I experimented with various Kinect interfaces and drivers of differing and incompatible versions. The original drivers that enabled MikuMiku to work with the Kinect used an old version of OpenNI (1.0.0.0) and NITE, with special non-Microsoft Kinect drivers. The Unity examples used later versions of the drivers and OpenNI that were incompatible with MikuMiku, which meant that I had to abandon running MikuMiku on the same machine. I managed to get a Unity demo running using OpenNI 2.0, but in this version the T-pose which I used to calibrate the figure and the projection was no longer supported; calibration was automatic as soon as you entered the performance space, resulting in the projected figure not being co-located on the body.
Technical issues are tedious, frustrating, time consuming and an unavoidable element of using technology as a creative medium.
Yesterday I produced a number of new tests using Unity and the Microsoft Kinect SDK, which offers options in Unity to control the calibration: either automatic, or activated by the performer adopting a specific pose.
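As a rough sketch of what the pose-activated option involves, the snippet below checks for a T-pose of the kind used for calibration: both hands roughly level with the shoulders and stretched out sideways. The joint positions are hypothetical stand-ins for whatever the Kinect SDK wrapper actually reports.

```csharp
using UnityEngine;

// Sketch only: a simple geometric test for a calibration T-pose,
// given joint positions in metres from the sensor.
public class TPoseCalibration : MonoBehaviour
{
    public float heightTolerance = 0.15f;  // metres

    public bool IsTPose(Vector3 leftHand, Vector3 rightHand,
                        Vector3 leftShoulder, Vector3 rightShoulder)
    {
        // Hands roughly at shoulder height...
        bool handsLevel =
            Mathf.Abs(leftHand.y - leftShoulder.y) < heightTolerance &&
            Mathf.Abs(rightHand.y - rightShoulder.y) < heightTolerance;

        // ...and stretched outwards past the shoulders.
        bool armsOut =
            leftHand.x < leftShoulder.x && rightHand.x > rightShoulder.x;

        return handsLevel && armsOut;
    }
}
```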
Below are three examples of these experiments, illustrating the somewhat more realistic, human-like avatars as opposed to the cartoon anime figures of MikuMiku:
Male Avatar:
Female Avatar:
Male Avatar, performer without head mask:
This last video exhibits a touch of the uncanny, where the human face of the performer alternately blends with and dislocates from the face of the projected avatar, the human and the artificial other being simultaneously juxtaposed.
Performative Interaction and Embodiment on an Augmented Stage