Tag Archives: uncanny

12. Live Performance

Inspired by the intermedial performance work of Jo Scott, I am beginning to outline a series of experimental live performances as a means of testing the hypothesis that the uncanny can be evoked through intermedial performance. 'Intermedial' is used here to highlight the mutually dependent relationship between the performer and the media being used.

Jo Scott improvises with her technology as a means of being present and evoking liveness, giving her the ability to move her performance in any direction at any time, responding to feedback from her system, the generated media and the audience. By comparison, the iMorphia system as it currently stands does not support this type of live improvisation: characters are selected via a computer interface to the Unity engine and, once chosen, are fixed.

How might a system be created that supports the type of live improvisation offered by Jo's system? How might different aspects of the uncanny be evoked and changed rapidly and easily? What form might the performance take? What does the performance space look like? What is the content, and what types of technology might be used to deliver a live interactive uncanny performance?

How does the performance work of Jo Scott compare to other intermedial performances – such as the work of Rose English, Forced Entertainment and Forkbeard Fantasy? Are there other examples that might be used to compare and contrast?

I am beginning to imagine a palette of possibilities, a space where objects, screens and devices can be moved around and changed. An intimate space with one participant, myself as performer/medium, and the intermedial technology of interactive multimedia devices, props, screens and projectors – a playful and experimental space where work might be continually created, developed and trialled over several weeks.

The envisaged performance will require the development of iMorphia to extend body mapping and interaction in order to address some of the areas of future research mapped out following the workshops – such as face mapping, live body swapping and a mutual interactive relationship between performer, participant and the technology.

Face projection, interactive objects and the heightened inter-relationship between performer and virtual projections are seen as key areas where the uncanny might be evoked.

There will need to be a balance between content creation and technical developments in order that the research can be contained and released.

 

Face tracking/mapping

Live interactive face mapping is a relatively new phenomenon and is incredibly impressive, with suggestions of the uncanny, as the Omote project demonstrates (video, August 2014):

Omote used bespoke software written by the artist Nobumichi Asai. It is highly compute-intensive (two systems run in parallel) and involves complex, labour-intensive procedures of special make-up and reflective dots for the effect to work correctly.

Producing such an effect may not be possible within the technical and time limitations of the research; however, there are off-the-shelf alternatives that achieve a less robust and accurate face-mapping effect, including a face-tracking plugin for Unity and a number of webcam-based applications such as the Faceshift software.
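Whatever the tracker, the essence of projecting onto a tracked face is a coordinate remap from camera space to projector space. The sketch below assumes a crude two-point affine calibration (per-axis scale and offset, measured by hand by projecting markers and noting where the camera sees them); the function names and numbers are mine, not taken from any of the plugins mentioned:

```python
# Sketch: remap a webcam face-detection box into projector space.
# Assumes a simple affine calibration derived from two hand-measured
# point pairs. All names and numbers are illustrative.

def make_affine(cam_pts, proj_pts):
    """Derive per-axis scale and offset from two calibration point pairs."""
    (cx0, cy0), (cx1, cy1) = cam_pts
    (px0, py0), (px1, py1) = proj_pts
    sx = (px1 - px0) / (cx1 - cx0)
    sy = (py1 - py0) / (cy1 - cy0)
    return sx, px0 - sx * cx0, sy, py0 - sy * cy0

def cam_to_proj(box, calib):
    """Map a camera-space face box (x, y, w, h) into projector space."""
    sx, ox, sy, oy = calib
    x, y, w, h = box
    return (sx * x + ox, sy * y + oy, sx * w, sy * h)

calib = make_affine([(0, 0), (640, 480)], [(120, 80), (1160, 860)])
face_box = (300, 200, 80, 80)         # as a webcam tracker might report
print(cam_to_proj(face_box, calib))   # → (607.5, 405.0, 130.0, 130.0)
```

A real system would refresh `face_box` every frame from the tracker and render the face texture into the returned projector-space rectangle.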

The ability to change the face is also being added to mobile devices and face-to-face communication tools such as Skype, as the recently launched (December 2014) Looksery software demonstrates:

Alternatively, rather than attempting to create an accurate face tracking system, choreographed actors and crafted content can produce a similar effect:


11. The Uncanny, Praxis and Intermediality

I have been reading a very in-depth study, The Uncanny by Nicholas Royle (reviewed here), and the fascinating The Freudian Robot by Lydia H. Liu, which explores relationships between Lacan, Claude Shannon, cybernetics and the uncanny.

In January I interviewed the intermedial performer Jo Scott, who recently completed a practice-based PhD at Central, and also met with her supervisor Robin Nelson, author of the incredibly useful and informative Practice as Research in the Arts.

During the interview we discussed many aspects of practice as research, praxis, performance as research and research as performance, negotiating live performance, impro/comprovisation, and the use of technology as a performative medium.

We also talked about influences and other intermedial artists including Forced Entertainment, Laurie Anderson, artist Gary Hill and theatre company 1927.

One of the points both Jo and Robin made with regard to PaR was that rather than thinking or theorising, one uses practice as a method for working through a problem. This notion struck a chord with my own struggles over where to go next with iMorphia. Rather than trying to analyse the research to date and deduce a future direction, it now feels more appropriate that I should practise my way forward.

The recorded interview has been transcribed and will serve as a basis for informing the next phase of the practice-based research.

 

Shana Moulton

Last night I witnessed the second performance by the New York performer Shana Moulton at Primary in Nottingham; she uses projections and live performance as a means of evoking and expressing her alter ego, Clair.

shana primary

The image above illustrates how Shana uses projections to create a virtual set in which she performs. Her alter ego is projected onto the electrically operated armchair; when Shana sets it to lift with a remote control, the projected alter ego rises, floats upwards and escapes through the projected stained-glass ceiling.

shana primary2

Shana Moulton’s performative work successfully utilises video projections to create engaging, surreal, darkly comic intermedial theatrical performances, as the video below illustrates.

 

My New Robot Companion

Anna Dumitriu, director of Unnecessary Research, and Alex May exhibit their surreal and uncanny Familiar Head at the Nesta FutureFest in March. Their website My New Robot Companion documents a residency in the Department of Computer Science at the University of Hertfordshire.

HARR1
HARR1 – with projected robotic face

There are resonances here – evoking the uncanny through projection, performance, installation and sensing technologies.

Alex has also written a free software tool, “Painting with Light”, which enables artists to experiment with projection mapping.

Video sculpture using projection mapping and Painting with Light software, exhibited at Kinetica Art Fair 2014
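At the core of projection-mapping tools of this kind is a square-to-quad warp: the four corners of a physical surface are picked out in projector space, and a projective transform carries every point of the unit video frame onto that surface. A minimal sketch, using Heckbert's classic square-to-quad formulation; the corner ordering ((0,0), (1,0), (1,1), (0,1)) is my assumption:

```python
# Sketch: square-to-quad projective warp, the basic operation behind
# projection mapping. Given the quad's corners in projector space,
# returns a function mapping unit-square (u, v) onto the quad.

def square_to_quad(quad):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    dx1, dx2, dx3 = x1 - x2, x3 - x2, x0 - x1 + x2 - x3
    dy1, dy2, dy3 = y1 - y2, y3 - y2, y0 - y1 + y2 - y3
    det = dx1 * dy2 - dy1 * dx2          # assumes a non-degenerate quad
    g = (dx3 * dy2 - dy3 * dx2) / det
    h = (dx1 * dy3 - dy1 * dx3) / det
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):
        w = g * u + h * v + 1.0          # projective divisor
        return (a * u + b * v + c) / w, (d * u + e * v + f) / w

    return warp

warp = square_to_quad([(0, 0), (4, 1), (3, 3), (1, 2)])
print(warp(1, 1))   # lands on the quad's third corner, (3, 3)
```

The same warp, applied per pixel (or per vertex on the GPU), is what lets a flat video frame sit convincingly on an angled physical object.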

8. Evaluation Workshop

In order to evaluate the effectiveness of ‘iMorphia’, the prototype performance system, and to gain critical feedback, sixteen performers took part in a series of workshops carried out between the 14th and 18th of April 2014 in the Mixed Reality Lab at Nottingham University.

One of the key observations was that content affects human interaction. This was originally posed as a research question in October 2013:

“Can the projected illusion affect the actor such that they feel embodied by the characteristics of the virtual character?”

An interesting observation was the powerful and often liberating effect of changing the gender of male and female participants, producing comments such as “I feel quite powerful like this” (f->m), “I feel more sensual” (m->f).

When playing the opposite gender, all participants expressed an awareness of stereotypes: males did not want to behave in what they perceived as a stereotypical fashion towards the female character, whilst females in male character seemed to relish the idea of playing with male stereotypes. These reactions reflect a contemporary post-feminist society in which the act of stereotyping females is politically charged. A number of males reported feeling that they had to respect the female character as if it had an independent life.

One participant likened the effect of changing gender to the medieval ‘Festival of Fools’, where putting on the clothes of the opposite gender is a foolish thing to do and gives permission to play the fool and break rules, something once regarded as powerful and liberating. This sentiment was echoed by a number of participants: the system gave you freedom and permission to be other than one’s normal everyday self, removed from people’s expectations of how one is supposed to behave.

In summary the key observations resulting from the workshops were:

i) The effectiveness of body projection in creating a body mask that is sufficiently convincing and effective in creating a suspension of disbelief in both performer and audience.

ii) How system artefacts such as lag and tracking errors were exploited by performers to explore notions of the double and the uncanny.

iii) The affective response of the performer when in character compared to the objective response when viewing the projection as an audience member.

The video below contains short extracts from the four hours of recorded video, with text overlays of comments by the performers.

5. Research Review: Theory and Practice

The recent practical experiments were motivated by the desire to create a transformational experience for a performer (or performers) and their audience using multi-modal technology (projection, live responsive computer generated characters and Kinect body sensing).

A research question might be “Can a projected responsive avatar produce a sense of the uncanny in a performer and/or an audience?”

Classic research requires that this hypothesis be tested and validated, typically through user testing, questions and analysis. Rather than simply testing a hypothesis, my personal preference is to discover how other performers react to the system, how it might be further developed and whether it has any value. To this end it is planned that a number of workshops will be held in approximately 8–10 weeks’ time, after a series of questions and planned scenarios have been developed – a workshop structure.

Meanwhile I do feel that this approach has a limited trajectory: it is not difficult to envisage how a more stable and believable system might be developed, and one can imagine scenarios and short scenes of how it might be used. If this were an arts project with an intended public audience, I would be focussing on improving the quality and interactive responses of the system, developing scripts and creating believable and engaging content.

However this is research, and I am feeling unsure exactly of how to balance theory and practice. Further, I am not entirely clear as to what is an appropriate research methodology given that my work and approach sits somewhere uncomfortably between Art Practice and Computer Science.

My feeling is that the Unity/Kinect model has reached an end and that other techniques need to be explored. If this were purely an arts-practice-led PhD then I believe this would be a valid and acceptable mode of enquiry: new techniques need to be tried without resorting to establishing a hypothesis and then testing it to determine its validity. I refer back now to my research proposal, where I examined various research methods, especially the Performative Research Manifesto envisaged by Brad Haseman.

Taking its name from J.L. Austin’s speech act theory, performative research stands as an alternative to the qualitative and quantitative paradigms by insisting on different approaches to designing, conducting and reporting research. The paper concludes by observing that once understood and fully theorised, the performative research paradigm will have applications beyond the arts and across the creative and cultural industries generally.

(Haseman 2006).

 

Two new interactive practice-driven methodologies I wish to explore are:

1. The use of Augmented Reality as a means of creating invisible props that can respond to the performer.

2. The exploration of the performative use of Virtual Human technologies being developed by the Virtual Humans Research group at the Institute for Creative Technologies USC.

These two methodologies encompass two very different underlying forms of improvisation and interaction. The first seeks to create a space for improvisation, the unexpected and the magical, relying more on performer improvisation than system improvisation. The second places more emphasis on system improvisation, where characters have more “life” and the performer has to respond or interact with independent agent-based entities.

In order to establish the feasibility of whether a Virtual Human might be used in a performative context, I have downloaded the Virtual Human Toolkit, which integrates with the Unity framework. The toolkit appears to offer many unusual and interesting capabilities: voice recognition, gaze awareness and the creation of scripts to define responses to user interactions.
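To get a feel for what such scripted responses involve, the stimulus–response idea can be caricatured in a few lines. This is an illustrative keyword-rule sketch, not the toolkit's actual API (which, as I understand it, uses statistical text classification); the rules and replies are invented:

```python
# Sketch: rule-based response selection of the kind a virtual-human
# scripting layer provides. Keyword sets and replies are illustrative.
import re

RULES = [
    ({"hello", "hi"}, "Hello there. What shall we perform today?"),
    ({"name"}, "I am the virtual other. You may call me what you like."),
    ({"goodbye", "bye"}, "Until the next rehearsal, then."),
]
FALLBACK = "I am not sure how to respond to that."

def respond(utterance):
    """Pick the first rule whose keywords overlap the performer's words."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    for keywords, reply in RULES:
        if keywords & words:
            return reply
    return FALLBACK

print(respond("Hello, can you hear me?"))
# → "Hello there. What shall we perform today?"
```

Even this toy version shows why system improvisation shifts the performer's role: the reply is chosen by the system, and the performer must react to it live.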

4. Kinect and Unity – Semi-realistic characters

The previous post dealt with the generation and acquisition of more realistic human characters suitable for importing into the Unity games engine and controllable by performers via the Kinect plugin. This post features four video demonstrations of the results.

1. Live character projection mapping
A character from a Unity asset attempts to follow the poses and walking movements of the performer, with variable degrees of success.

 

2. Live MakeHuman character projection mapping
The character is exported from MakeHuman as a Collada (.dae) asset suitable for importing into Unity. The character exhibits a greater degree of realism and may at times be perceived as somewhat uncanny. Its behaviour is limited by its inability to move horizontally with the performer.

 

3. Live DAZ character projection mapping
The imported semi-realistic human character is a free asset included with the DAZ software; the eyes are incorrectly rendered, but this accidentally produces a somewhat eerie effect. The character can be seen to follow the movements of the performer with reasonable coherence; glitches appear when the performer moves too close to the back wall and the Kinect becomes incapable of tracking them correctly.

 

4. Live two character projection mapping
This video is perhaps one of the more interesting, in that watching two characters appears to be more engaging than watching one. We tend to read into the video as if the characters are interacting with each other and having a dialogue. One might imagine they are a couple arguing over something, when in fact the two performers were simply testing the interaction of the system, moving back and forth and independently moving their arms without attempting to convey any meaningful interaction or dialogue.

3. Realism and The Uncanny

One of the criticisms of the original MikuMorphia was that because it looked like a cartoon, it was far from uncanny. The Uncanny Valley of computer graphics appears to lie somewhere between human and inhuman realism, where a 3D render makes the viewer feel uncomfortable because it is almost convincing as a mimetic version of a human, yet something feels or appears not quite right. In order to explore this in-between region I therefore had to look towards acquiring or producing more realistic models of humans.

 

Unity and 3d human-like models

The chosen 3D graphics engine is Unity, which fortunately can import standard 3D models in a number of formats. Rather than attempting to create a model of a human from scratch, I investigated downloading free models from a variety of sources, including Turbosquid and TF3DM. Many of these models exhibited a reasonable amount of realism; however, for a model to work with the Kinect it also needs to possess a compatible armature so that the limbs move in response to the actor.


Character illustrating internal armature

Rigging a 3D model with an armature such that the outer skin moves in a realistic manner is a non-trivial task, requiring skill and the use of complex 3D modelling tools such as Maya, 3ds Max or Blender. I had high hopes for the automated rigging provided by the online software Mixamo; however, the rigging it generated proved incompatible with the Kinect requirements.

The astounding open-source software package MakeHuman enables the generation of an infinite range of human-like figures, with sliders to adjust weight, age, sex and skin colouring, and to subtly alter limb lengths and facial features.


This package offers a refreshing alternative to the endless fantasy and game-like characters prevalent among human character models on the internet. The generated armature is almost compatible with the Kinect requirements, such that a figure can mimic the movement of the actor; but for the lack of one vital element, the root node, the actor has to remain in one position, as the virtual character is unable to follow them if they move left or right. I will be investigating the feasibility of adding this root node via one of the aforementioned 3D modelling tools.
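The root-node problem can be sketched independently of any particular tool: the fix is to reparent the bone hierarchy under a new root bone that carries the performer's world translation, so the whole character moves when the actor does. The bone names below are assumptions about the export, not MakeHuman's actual naming:

```python
# Sketch: patching an armature that lacks a root node by reparenting
# the hierarchy under a new "Root" bone. Bone names are illustrative.

def add_root(bones, root_name="Root"):
    """bones: {bone_name: parent_name_or_None}. Returns a new hierarchy
    in which every former root bone hangs from a single new root."""
    patched = {root_name: None}
    for name, parent in bones.items():
        patched[name] = parent if parent is not None else root_name
    return patched

def world_position(local_offsets, bones, name):
    """Sum local offsets up the parent chain. With a root node, moving
    the root translates the whole character (so it can follow the actor)."""
    x, y = 0.0, 0.0
    while name is not None:
        dx, dy = local_offsets[name]
        x, y = x + dx, y + dy
        name = bones[name]
    return x, y

bones = {"Hips": None, "Spine": "Hips", "Head": "Spine"}
rig = add_root(bones)
offsets = {"Root": (2.0, 0.0),       # actor has walked 2 m to the right
           "Hips": (0.0, 1.0), "Spine": (0.0, 0.5), "Head": (0.0, 0.4)}
print(world_position(offsets, rig, "Head"))   # → (2.0, 1.9)
```

In Blender or a similar tool the same operation is adding a root bone and making the hips its child, after which the Kinect's torso translation can be routed through the root.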

DAZ3d Studio, a free character-modelling package, does successfully generate characters that possess the correct armature. Although the software is free, the characters, clothing and accessories such as hair are all chargeable. However, rather than attempting to model characters from scratch, this software, with its vast library, provides a painless and efficient method of generating semi-realistic human-like characters.

 

Critical Note

I was somewhat surprised by the amount of what can only be termed soft porn available in the form of female 3D models in provocative poses, naked or wearing skimpy clothing, suggesting a strange, predominantly male user base using the software to create animations and fantasies that objectify women in an adolescent and very non-PC manner.

A Google image search of DAZ3d results in a disturbing overview of the genre.

 

2. Unity 3D and Kinect tests

Overview
It has been some time since the experimental performance MikuMorphia and the dubious delights of being transformed into a female Japanese anime character. Since then I have cogitated and ruminated on following up the experiment with new work, as well as reading texts by Sigmund Freud and Ernst Jentsch on the nature of the uncanny, with a view to writing a positional statement on how these ideas relate to my investigations in performance and technology.

In January I moved into a bay in the Mixed Reality Lab and began to develop a more user-friendly version of the original experimental performance, whereby other people could easily experience the transformation and its subsequent sense of uncanniness without having to don a white skin-tight lycra suit. Additionally, I wanted to move away from the loaded and restrictive designs of the MikuMiku prefab anime characters. I investigated importing other anime characters and ran a few tests that included the projection of backdrops, but these experiments did not break any new ground. Further, the MikuMiku software was closed and did not allow one to get under the hood to alter its dynamics and interactive capabilities.

MikuMorphia as spectator
Rather than abandoning the MikuMiku experience altogether, I carried out some basic “user testing” with a few willing volunteers in the MR lab. Rather than having to undress and squeeze into a tight lycra body suit, participants don a white boiler suit over their normal clothes. This does not produce an ideal body surface for projection, being a rather baggy outfit with creases and folds, but it enables people to try out the experience easily.
Observing participants trying out the MikuMiku transformation as a spectator rather than a performer made clear to me that watching the illusion and the behaviour of a participant is a very different experience from being immersed in it as a performer.
The subjective experience of seeing oneself as other is completely different from objectively watching a participant; the sense of the uncanny as a spectator appears to be lost.

Rachel Jacobs, an artist and performer, likened the experience to having the performer’s internal vision of their performance character made visually explicit, rather than internalised and visualised “in the mind’s eye”. The concept of the performer’s character visualisation being made explicit through the visual feedback of the projected image is one that deserves further investigation with other performers experienced in character visualisation.

Video of Rachel experiencing the MikuMiku effect:

Unity 3D
My first choice of an alternative to MikuMiku is the games engine Unity 3D, which enables bespoke coding, has plugins for the Kinect, and offers an asset store from which characters, demos and scripts can be downloaded and modded. In addition, the Unity community, with its forums and experts, provides a platform for problem solving and includes examples of a wide range of experimental work using the Kinect.

Over the last few days, with support from fellow MRL PhD student Dimitrios, I experimented with various Kinect interfaces and drivers of differing and incompatible versions. The original drivers that enabled MikuMiku to work with the Kinect used an old version of OpenNI (1.0.0.0) and NITE, with special non-Microsoft Kinect drivers. The Unity examples used later versions of the drivers and OpenNI that were incompatible with MikuMiku, which meant I had to abandon running MikuMiku on the same machine. I managed to get a Unity demo running using OpenNI 2.0, but in this version the T-pose I had used to calibrate the figure and the projection was no longer supported; calibration was automatic as soon as you entered the performance space, resulting in the projected figure not being co-located on the body.

Technical issues are tedious, frustrating, time consuming and an unavoidable element of using technology as a creative medium.

Yesterday I produced a number of new tests using Unity and the Microsoft Kinect SDK, which offers options in Unity to control the calibration: automatic, or activated by the performer adopting a specific pose.
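Pose-activated calibration amounts to a simple geometric test on the joint positions the sensor reports; a T-pose, for instance, is both arms extended horizontally at shoulder height. The sketch below is my own illustrative check (joint names and the tolerance are assumptions, not the SDK's actual calibration logic):

```python
# Sketch: detecting a calibration T-pose from reported skeleton joints.
# Joints are (x, y) positions in metres; names and threshold are invented.

def is_t_pose(joints, tol=0.15):
    """True if elbows and hands are level with the shoulders (y within tol)
    and the hands are outboard of the shoulders (arms extended)."""
    for side in ("left", "right"):
        sx, sy = joints[f"{side}_shoulder"]
        ex, ey = joints[f"{side}_elbow"]
        hx, hy = joints[f"{side}_hand"]
        if abs(ey - sy) > tol or abs(hy - sy) > tol:
            return False          # arm not horizontal
        if abs(hx) <= abs(sx):
            return False          # arm not extended outwards
    return True

t_pose = {"left_shoulder": (-0.2, 1.4), "left_elbow": (-0.5, 1.42),
          "left_hand": (-0.8, 1.38),
          "right_shoulder": (0.2, 1.4), "right_elbow": (0.5, 1.41),
          "right_hand": (0.8, 1.43)}
print(is_t_pose(t_pose))   # → True
```

A check like this, run each frame, lets the performer trigger calibration deliberately rather than having it fire the moment they enter the space.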

Below are three examples of these experiments, illustrating somewhat more realistic human-like avatars as opposed to the cartoon anime figures of MikuMiku:

Male Avatar:

Female Avatar:

Male Avatar, performer without head mask:

This last video exhibits a touch of the uncanny, where the human face of the performer alternately blends and dislocates with the face of the projected avatar, the human and the artificial other being simultaneously juxtaposed.