
12. Live Performance

Inspired by the intermedial performance work of Jo Scott, I am beginning to formulate an outline for a series of experimental live performances as a means of testing the hypothesis that it is possible to evoke the uncanny through intermedial performance. Intermedial is used here to highlight the mutually dependent relationship between the performer and the media being used.

Jo Scott improvises with her technology as a means of being present and evoking liveness, giving her the ability to move her performance in any direction at any time, responding to feedback from her system, the generated media and the audience. By comparison, the iMorphia system as it currently stands does not support this type of live improvisation: characters are selected via a computer interface to the Unity engine and, once chosen, are fixed.

How might a system be created that supports the type of live improvisation offered by Jo's system? How might different aspects of the uncanny be evoked and changed rapidly and easily? What form might the performance take? What does the performance space look like? What is the content, and what types of technology might be used to deliver a live interactive uncanny performance?

How does the performance work of Jo Scott compare to other intermedial performances – such as the work of Rose English, Forced Entertainment and Forkbeard Fantasy? Are there other examples that might be used to compare and contrast?

I am beginning to imagine a palette of possibilities: a space where objects, screens and devices can be moved around and changed. An intimate space with one participant, myself as performer/medium and the intermedial technology of interactive multimedia devices, props, screens and projectors – a playful and experimental space where work might be continually created, developed and trialled over a number of weeks.

The envisaged performance will require the development of iMorphia to extend body mapping and interaction in order to address some of the areas of future research mapped out following the workshops – such as face mapping, live body swapping and a mutual interactive relationship between performer, participant and the technology.

Face projection, interactive objects and the heightened inter-relationship between performer and virtual projections are seen as key areas where the uncanny might be evoked.

There will need to be a balance between content creation and technical development so that the research remains contained and can be delivered.

 

Face tracking/mapping

Live interactive face mapping is a relatively new phenomenon and is incredibly impressive, with suggestions of the uncanny, as the Omote project demonstrates (video, August 2014):

Omote used bespoke software written by the artist Nobumichi Asai. It is highly computer intensive (two systems are used in parallel) and involves complex and labour-intensive procedures of special make-up and reflective dots for the effect to work correctly.

Producing such an effect may not be possible within the technical and time limitations of the research; however, there are off-the-shelf alternatives that achieve a less robust and accurate face-mapping effect, including a face-tracking plugin for Unity and a number of webcam-based software applications such as faceshift.
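As a rough illustration of what even a basic webcam approach involves, the sketch below uses OpenCV's bundled Haar cascade detector – an assumption on my part, not necessarily how the Unity plugin or faceshift work – to locate a face that a projection stage could then map onto:

```python
# Minimal webcam face-tracking sketch using OpenCV's bundled Haar cascade.
import cv2

# Frontal-face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) box that a projection-mapping
    # stage could use to position a projected face.
    for (x, y, w, h) in cascade.detectMultiScale(grey, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A bounding box of this kind is of course far cruder than the per-feature meshes Omote uses, but it indicates how little is needed to begin experimenting.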

The ability to change the face is also being added to mobile devices and tools for face-to-face communication such as Skype, as the recently launched (December 2014) software Looksery demonstrates:

Alternatively, rather than attempting to create an accurate face tracking system, choreographed actors and crafted content can produce a similar effect:

11. The Uncanny, Praxis and Intermediality

I have been reading a very in-depth study of The Uncanny by Nicholas Royle (reviewed here) and Lydia H. Liu's fascinating The Freudian Robot, which explores relationships between Lacan, Claude Shannon, cybernetics and the uncanny.

In January I interviewed the intermedial performer Jo Scott, who recently completed a practice-based PhD at Central, and also met with her supervisor Robin Nelson, author of the incredibly useful and informative Practice as Research in the Arts.

During the interview we discussed many aspects of practice as research, praxis, performance as research and research as performance, negotiating live performance, impro/comprovisation, and the use of technology as a performative medium.

We also talked about influences and other intermedial artists including Forced Entertainment, Laurie Anderson, artist Gary Hill and theatre company 1927.

One of the points both Jo and Robin made with regard to PaR was that rather than thinking or theorising, one uses practice as a method for working through a problem. This notion struck a chord with my own struggles over where to go next with iMorphia. Rather than trying to analyse the research to date and deduce a future direction, it now feels more appropriate that I should practise my way forward.

The recorded interview has been transcribed and will serve as a basis for informing the next phase of the practice-based research.

 

Shana Moulton

Last night I witnessed the second performance by the New York performer Shana Moulton at Primary in Nottingham. Moulton uses projections and live performance as a means of evoking and expressing her alter ego, Clair.

shana primary

The image above illustrates how Shana uses projections to create a virtual set in which she performs. Her alter ego is projected onto an electrically operated armchair; when Shana raises the chair with a remote control, the projected alter ego rises, floats upwards and escapes through the projected stained-glass ceiling.

shana primary2

Shana Moulton's performative work successfully utilises video projections to create engaging, surreal, darkly comic intermedial theatrical performances, as the video below illustrates.

 

My New Robot Companion

Anna Dumitriu, director of Unnecessary Research, and Alex May exhibit their surreal and uncanny Familiar Head at the Nesta Futurefest in March. Their website My New Robot Companion documents a residency in the Department of Computer Science at the University of Hertfordshire.

HARR1
HARR1 – with projected robotic face

There are resonances here – evoking the uncanny through projection, performance, installation and sensing technologies.

Alex has also written a free software tool, Painting with Light, which enables artists to experiment with projection mapping.

IMG_1246-682x1024
Video sculpture using projection mapping  and Painting with Light software exhibited at Kinetica Art Fair 2014
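At its simplest, projection mapping onto a flat surface comes down to a perspective warp: the content image is distorted so that its corners land on the corners of the physical surface as seen from the projector. The sketch below illustrates that underlying idea with OpenCV – it is not how Painting with Light itself is implemented, and the corner coordinates are hypothetical values that would normally come from a calibration step:

```python
# Sketch of planar projection mapping via a perspective (homography) warp.
import cv2
import numpy as np

content = cv2.imread("texture.png")  # image to project (assumed filename)
h, w = content.shape[:2]

# Corners of the source image, and where those corners should land in the
# projector's 1920x1080 output (hypothetical, hand-calibrated values).
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[410, 220], [1530, 180], [1580, 900], [380, 940]])

# Compute the 3x3 perspective transform and warp the content into place.
H = cv2.getPerspectiveTransform(src, dst)
mapped = cv2.warpPerspective(content, H, (1920, 1080))

cv2.imshow("projector output", mapped)  # shown fullscreen on the projector
cv2.waitKey(0)
```

Non-planar surfaces such as the head above need a mesh rather than a single transform, but the principle of warping content into the projector's view is the same.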

5. Research Review: Theory and Practice

The recent practical experiments were motivated by the desire to create a transformational experience for a performer (or performers) and their audience using multi-modal technology (projection, live responsive computer-generated characters and Kinect body sensing).

A research question might be “Can a projected responsive avatar produce a sense of the uncanny in a performer and/or an audience?”

Classic research requires that this hypothesis be tested and validated, typically through user testing, questions and analysis. Rather than simply testing a hypothesis, my personal preference is to discover how other performers react to the system, how it might be further developed and whether it has any value. To this end it is planned that a number of workshops will be held in approximately 8–10 weeks' time, after a series of questions and planned scenarios – a workshop structure – has been developed.

Meanwhile I do feel that this approach has a limited trajectory: it is not difficult to envisage how a more stable and believable system might be developed, and one can imagine scenarios and short scenes of how it might be used. If this were an arts project with an intended public audience, I would be focussing on improving the quality and interactive responses of the system, developing scripts and creating believable and engaging content.

However, this is research, and I am unsure exactly how to balance theory and practice. Further, I am not entirely clear what an appropriate research methodology would be, given that my work and approach sits somewhat uncomfortably between Art Practice and Computer Science.

My feeling is that the Unity/Kinect model has reached an end and that other techniques need to be explored. If this were purely an arts-practice-led PhD, I believe this would be a valid and acceptable mode of enquiry: new techniques need to be tried without resorting to establishing a hypothesis and then testing it to determine its validity. I refer back now to my research proposal, where I examined various research methods, especially the performative research manifesto envisaged by Brad Haseman.

Taking its name from J.L. Austin’s speech act theory, performative research stands as an alternative to the qualitative and quantitative paradigms by insisting on different approaches to designing, conducting and reporting research. The paper concludes by observing that once understood and fully theorised, the performative research paradigm will have applications beyond the arts and across the creative and cultural industries generally.

(Haseman 2006).

 

Two new interactive, practice-driven methodologies I wish to explore are:

1. The use of Augmented Reality as a means of creating invisible props that can respond to the performer.

2. The exploration of the performative use of Virtual Human technologies being developed by the Virtual Humans Research group at the Institute for Creative Technologies USC.

These two methodologies encompass two very different underlying forms of improvisation and interaction – the first seeks to create a space for improvisation, the unexpected and magic, relying more on performer improvisation than system improvisation. The second methodology places more emphasis on system improvisation, where characters have more “life” and the performer has to respond or interact with independent agent based entities.
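To make the first idea concrete: an invisible prop can be reduced to a region of tracked space that fires an event when a performer's joint enters it. Below is a minimal sketch of such a trigger zone; the joint names and coordinates are hypothetical stand-ins for whatever the Kinect or an AR toolkit would supply:

```python
# Minimal sketch of an "invisible prop": a spherical trigger zone that
# fires when a tracked joint enters it. Joint names and coordinates are
# hypothetical stand-ins for Kinect or AR tracking data.
from dataclasses import dataclass

@dataclass
class TriggerZone:
    centre: tuple   # (x, y, z) in metres, tracking-space coordinates
    radius: float   # activation radius in metres
    name: str

    def contains(self, point):
        return sum((p - c) ** 2
                   for p, c in zip(point, self.centre)) <= self.radius ** 2

zones = [TriggerZone((0.5, 1.2, 2.0), 0.15, "invisible door handle")]

def on_joint_update(joint_name, position):
    """Called once per frame for each tracked joint."""
    for zone in zones:
        if joint_name == "right_hand" and zone.contains(position):
            print(f"prop triggered: {zone.name}")  # cue sound/projection here

# One (hypothetical) frame of tracking data:
on_joint_update("right_hand", (0.55, 1.25, 1.95))
```

The interesting performative question is what the trigger cues – a sound, a projected transformation – and whether the performer can discover and play with zones they cannot see.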

In order to establish whether a Virtual Human might feasibly be used in a performative context, I have downloaded the Virtual Human Toolkit, which integrates with the Unity framework. The toolkit appears to offer many unusual and interesting capabilities: voice recognition, gaze awareness and the creation of scripts to define responses to user interactions.
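As a toy illustration of what scripted responses of this kind amount to, the sketch below maps recognised utterances to a gesture and a spoken line. The dictionary format is invented for illustration only – it is not the toolkit's actual scripting language:

```python
# Toy illustration of scripted character responses. This format is invented
# for illustration and is NOT the Virtual Human Toolkit's actual scripting
# language.
import re

RESPONSES = {
    r"\b(hello|hi)\b":  ("wave", "Hello. I have been waiting for you."),
    r"\bwho are you\b": ("gaze", "I am whoever you project onto me."),
    r"\bgoodbye\b":     ("bow",  "Must you leave so soon?"),
}

def respond(utterance):
    """Map a recognised utterance to a (gesture, spoken line) pair."""
    for pattern, (gesture, line) in RESPONSES.items():
        if re.search(pattern, utterance.lower()):
            return gesture, line
    return "idle", "..."  # fallback keeps the character 'alive'

print(respond("Hello there"))  # ('wave', 'Hello. I have been waiting for you.')
```

Even a lookup table of this kind gives the character a degree of independence from the performer, which is precisely where the second methodology's system improvisation begins.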

4. Kinect and Unity – Semi-realistic characters

The previous post dealt with the generation and acquisition of more realistic human characters suitable for importing into the Unity games engine and controllable by performers via the Kinect plugin. This post features four video demonstrations of the results.

1. Live character projection mapping
A character from a Unity asset, attempting to follow the poses and walking movements of the performer with varying degrees of success.

 

2. Live MakeHuman character projection mapping
The character is exported from MakeHuman as a Collada (.dae) asset suitable for importing into Unity. The character exhibits a greater degree of realism and may at times be perceived as being somewhat uncanny. The behaviour of the character is limited due to its inability to move horizontally with the performer.

 

3. Live DAZ character projection mapping
The imported semi-realistic human character is a free asset included with the DAZ software. The eyes are incorrectly rendered, but this accidentally produces a somewhat eerie effect. The character can be seen to follow the movements of the performer with reasonable coherence; glitches appear when the performer moves too close to the back wall, at which point the Kinect becomes incapable of tracking the performer correctly.

 

4. Live two character projection mapping
This video is perhaps the most interesting of the four, in that watching two characters appears more engaging than watching one. We tend to read the video as though the characters are interacting with each other and having a dialogue. One might imagine they are a couple arguing over something, when in fact the two performers were simply testing the interaction of the system, moving back and forth and independently moving their arms without attempting to convey any meaningful interaction or dialogue.

3. Realism and The Uncanny

One of the criticisms of the original MikuMorphia was that because it looked like a cartoon, it was far from the uncanny. The uncanny valley of computer graphics appears to be located somewhere between human and inhuman realism – where a 3D render makes the viewer feel uncomfortable because it is almost convincing as a mimetic version of a human, but something feels or appears not quite right. In order to explore this in-between region I therefore had to look towards acquiring or producing more realistic models of humans.

 

Unity and 3D human-like models

The chosen 3D graphics engine is Unity, which fortunately is able to import a variety of standard 3D models in a number of formats. Rather than attempting to create a model of a human from scratch, I investigated downloading free models from a variety of sources, including Turbosquid and TF3DM. Many of these models exhibited a reasonable amount of realism; however, for a model to work with the Kinect it must also possess a compatible armature so that the limbs move in response to the actor.

45d

Character illustrating internal armature

Rigging a 3D model with an armature such that the outer skin moves in a realistic manner is a non-trivial task, requiring skill and the use of complex 3D modelling tools such as Maya, 3D Studio Max or Blender. I had high hopes for the automated rigging provided by the online software Mixamo; however, the rigging it generated proved incompatible with the Kinect requirements.

The astounding open-source software package MakeHuman enables the generation of an infinite range of human-like figures, with sliders to adjust weight, age, sex and skin colouring, and to make subtle alterations to limb lengths and facial features.

sshot01

This package offers a refreshing alternative to the endless fantasy and game-like characters prevalent among human character models on the internet. The generated armature is almost compatible with the Kinect requirements, such that a figure can mimic the movements of the actor; but due to the lack of one vital element (the root node), the actor has to remain in one position, as the virtual character is unable to follow the actor if they move left or right. I will be investigating the feasibility of adding this root node via one of the aforementioned 3D modelling tools.
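As a first sketch of what that fix might look like in Blender, the script below adds a root bone and re-parents the armature's existing top-level bones to it, so that whole-body translation has somewhere to live. It is untested against MakeHuman's actual export, and the object name is an assumption:

```python
# Blender (bpy) sketch: add a missing root bone to an imported armature and
# re-parent the existing parentless bones (e.g. the hips) to it.
import bpy

arm = bpy.data.objects["MakeHuman"]  # assumed name of the imported armature
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode='EDIT')

edit_bones = arm.data.edit_bones
root = edit_bones.new("Root")        # the vital missing element
root.head = (0.0, 0.0, 0.0)
root.tail = (0.0, 0.0, 0.1)

# Any bone without a parent becomes a child of the new root, so the whole
# skeleton can be translated left/right by moving a single bone.
for bone in edit_bones:
    if bone.parent is None and bone != root:
        bone.parent = root

bpy.ops.object.mode_set(mode='OBJECT')
```

Whether the Kinect plugin will accept a root added after the fact, rather than one baked into the original rig, remains to be tested.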

DAZ3d Studio, a free character-modelling package, does successfully generate characters that possess the correct armature. Although the software is free, the characters, clothing and accessories such as hair are all chargeable. However, rather than attempting to model characters from scratch, this software with its vast library provides a painless and efficient method of generating semi-realistic human-like characters.

 

Critical Note

I was somewhat surprised by the amount of what can only be termed soft porn available in the form of female 3D models in provocative poses, naked or wearing skimpy clothing, suggesting a strange, predominantly male user base using the software to create animations and fantasies of a form that objectifies women in an adolescent and very non-PC manner.

A Google image search of DAZ3d results in a disturbing overview of the genre.

 

1. MikuMorphia

This is a recording of an experimental live performance in which a gesture-responsive MikuMiku Japanese animé dance figure is projected onto the body of the performer, while the video of this projected body image is simultaneously seen by the performer through a pair of video glasses.

Observations
The resultant effect is that of a simultaneous Other and Double – the Double resulting from the co-existence of the male body superimposed upon, and transformed by, the projected female Other.

The immersive effect of seeing the body transformed into a female Other had a strange, uncanny effect on me as the performer, in that I began to play with and adopt the movements and characteristics accorded to the projected Other. For example, the physical characteristics of the avatar's long hair encouraged movements that resulted in greater expressions of flowing hair.

The visual feedback of an alternative body through projection has a transforming effect on the performer's behaviour, creating a sense of immersion in an “alter body”.

The documentation is a recording, and further tests need to be done to determine whether witnessing a live projection can convey the same sense of the uncanny. It has been pointed out that the video recording of a solitary act of performance constitutes an equally valid form of mediated performance, one associated with notions of voyeurism and secrecy that would not be present if performed live.

Further work
It is envisaged that alternative avatars and backdrop scenes will be created using Unity 3D to explore the effects and potentials of other characterisations. These might include archetypes from fairy tales (old man, wizard, prince, old lady, witch, princess, monster) or classic gaming characters such as Lara Croft and the Prince of Persia.

Technical details:
Hardware: Microsoft Kinect, Vuzix video glasses, i5 Windows PC, video projector, video camera.
Software: MikuMiku with OpenNI plugin.

System Diagram:

mikusystem
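Conceptually, the core of the system diagram above is a per-frame retargeting step: each skeleton joint reported by the Kinect via OpenNI drives the corresponding bone of the projected figure. The sketch below illustrates that step only; the joint and bone names are hypothetical, and the real MikuMiku/OpenNI plugin handles this internally:

```python
# Simplified sketch of the skeleton-to-avatar retargeting step. Joint and
# bone names are hypothetical; the real MikuMiku/OpenNI plugin does this
# internally.

JOINT_TO_BONE = {
    "head":           "Head",
    "torso":          "Spine",
    "left_shoulder":  "Shoulder.L",
    "right_shoulder": "Shoulder.R",
    "left_hand":      "Wrist.L",
    "right_hand":     "Wrist.R",
}

class Avatar:
    """Stand-in for the projected figure; a real system would update the
    rendering engine's bone transforms here."""
    def set_bone_transform(self, bone, position, rotation):
        print(f"{bone}: pos={position} rot={rotation}")

def retarget(skeleton_frame, avatar):
    """Copy each tracked joint's transform onto the matching avatar bone."""
    for joint, bone in JOINT_TO_BONE.items():
        data = skeleton_frame.get(joint)
        if data is None:  # joint occluded or lost - hold the previous pose
            continue
        avatar.set_bone_transform(bone, data["position"], data["rotation"])

# One (hypothetical) frame of tracking data:
frame = {"head": {"position": (0.0, 1.6, 2.0), "rotation": (0, 0, 0, 1)}}
retarget(frame, Avatar())
```

Each frame the system reads the skeleton, retargets it onto the figure, and renders from the projector's viewpoint so the avatar lands on the performer's body, while the camera feed returns that image to the performer's video glasses.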