Category Archives: Research Blog

12. Live Performance

Inspired by the intermedial performance work of Jo Scott, I am beginning to formulate an outline for a series of experimental live performances as a means of testing whether it is possible to evoke the uncanny through intermedial performance. 'Intermedial' is used here to highlight the mutually dependent relationship between the performer and the media being used.

Jo Scott improvises with her technology as a means of being present and evoking liveness, giving her the ability to move her performance in any direction at any time, responding to feedback from her system, the generated media and the audience. In comparison, the iMorphia system as it currently stands does not support this type of live improvisation: characters are selected via a computer interface to the Unity engine and, once chosen, are fixed.

How might a system be created that supports the type of live improvisation offered by Jo's system? How might different aspects of the uncanny be evoked and changed rapidly and easily? What form might the performance take? What does the performance space look like? What is the content, and what types of technology might be used to deliver a live interactive uncanny performance?

How does the performance work of Jo Scott compare to other intermedial performances – such as the work of Rose English, Forced Entertainment and Forkbeard Fantasy? Are there other examples that might be used to compare and contrast?

I am beginning to imagine a palette of possibilities, a space where objects, screens and devices can be moved around and changed. An intimate space with one participant, myself as performer/medium and the intermedial technology of interactive multimedia devices, props, screens and projectors – a playful and experimental space where work might be continually created, developed and trialled over a period of several weeks.

The envisaged performance will require the development of iMorphia to extend body mapping and interaction in order to address some of the areas of future research mapped out following the workshops – such as face mapping, live body swapping and a mutual interactive relationship between performer, participant and the technology.

Face projection, interactive objects and the heightened inter-relationship between performer and virtual projections are seen as key areas where the uncanny might be evoked.

There will need to be a balance between content creation and technical development in order that the research can be contained and released.


Face tracking/mapping

Live interactive face mapping is a relatively new phenomenon and is incredibly impressive, with suggestions of the uncanny, as the Omote project demonstrates (video August 2014):

Omote used bespoke software written by the artist Nobumichi Asai and is highly compute-intensive (two systems are used in parallel), involving complex and labour-intensive procedures of special make-up and reflective dots for the effect to work correctly.

Producing such an effect may not be possible given the technical and time limitations of the research; however, there are off-the-shelf alternatives that achieve a less robust and accurate face-mapping effect, including a face-tracking plugin for Unity and a number of webcam-based software applications such as faceshift.

The ability to change the face is also being added to mobile devices and tools for face-to-face communication such as Skype, as the recently launched (December 2014) software Looksery demonstrates:

Alternatively, rather than attempting to create an accurate face tracking system, choreographed actors and crafted content can produce a similar effect:





11. The Uncanny, Praxis and Intermediality

I have been reading a very in-depth study of The Uncanny  by Nicholas Royle (reviewed here) and the fascinating Freudian Robot by Lydia H. Liu, exploring relationships between Lacan, Claude Shannon, Cybernetics and The Uncanny.

In January I interviewed the intermedial performer Jo Scott, who recently completed a practice-based PhD at Central, and also met with her supervisor Robin Nelson, author of the incredibly useful and informative Practice as Research in the Arts.

During the interview we discussed many aspects of practice as research, praxis, performance as research and research as performance, negotiating live performance, impro/comprovisation, and the use of technology as a performative medium.

We also talked about influences and other intermedial artists including Forced Entertainment, Laurie Anderson, artist Gary Hill and theatre company 1927.

One of the points both Jo and Robin made with regard to PaR was that rather than thinking or theorising, one uses practice as a method for working through a problem. This notion struck a chord with my own struggles over where to go next with iMorphia. Rather than trying to analyse the research to date and deduce a future direction, it now feels more appropriate that I should practise my way forward.

The recorded interview has been transcribed and will serve as a basis for informing the next phase of the practice-based research.


Shana Moulton

Last night I witnessed the second performance at Primary in Nottingham by New York performer Shana Moulton, who uses projections and live performance as a means of evoking and expressing her alter ego Clair.


The image above illustrates how Shana uses projections to create a virtual set in which she performs. Her alter ego is projected onto the electrically operated armchair; when Shana sets the chair to lift via a remote control, the projected alter ego rises, floats upwards and escapes through the projected stained-glass ceiling.


Shana Moulton’s performative work successfully utilises video projections to create engaging, surreal, darkly comic intermedial theatrical performances, as the video below illustrates.


My New Robot Companion

Anna Dumitriu, director of Unnecessary Research, and Alex May exhibit their surreal and uncanny Familiar Head at the Nesta Futurefest in March. Their website My New Robot Companion documents a residency in the Department of Computer Science at the University of Hertfordshire.

HARR1 – with projected robotic face

There are resonances here – evoking the uncanny through projection, performance, installation and sensing technologies.

Alex has also written a free software tool, “Painting with Light”, which enables artists to experiment with projection mapping.

Video sculpture using projection mapping  and Painting with Light software exhibited at Kinetica Art Fair 2014

10. Fascinate, Evaluate and Praxis

My last post, 9. Improvisation Workshop – Two Performers, was over two months ago in early August; this post, written at the end of October, is a brief status update.

At the end of August I attended the Fascinate conference in Falmouth, where I presented a paper and ran a small workshop on iMorphia. The production of the paper served as a process of reflection and re-evaluation. The possible directions for the development of iMorphia seemed to be continually expanding, and it was becoming clear I needed to try and draw the threads together and focus. I was also becoming highly aware of the increasing tension between art making and PhD thesis production. As an artist I wanted to use iMorphia to make a piece of work, a production, a live performance – something that would be surreal and uncanny. My supervisors called for focus and rigour and the production of a paper that might be presented at HCI conferences and also be of interest to the performing arts.

Early on in my proposal I had expressed the desire to use practice as a main component and had found resonance with the Manifesto for Performative Research (Haseman 2006).

How might Art as Mode of Inquiry, a method I had used in my previous research in Computer Related Design at the RCA (1995–2001), be used as a basis for PhD research?

Serendipitously I discovered Practice as Research in the Arts (Robin Nelson 2013), and a Kindle version enabled me to make copious notes. This was the method I wished to employ. It is highly recommended reading for any would-be practice-as-research practitioner, especially those, like me, based in the more traditional positivist, scientific and empirical branches of academia. After reading it I felt enlightened and equipped with methods and evidence to create a new direction of melding practice with theory – praxis.

The next task was to find an appropriate research field where my work might be located, a territory that might offer similar examples of practice and associated theories that I might use to synthesise my praxis.

I began reading the theses of other practitioner researchers, references taking me into Critical Theory, the fuzzy words and worlds of Continental Philosophy (Derrida et al.), Object-Oriented Ontology and Speculative Realism, Phenomenology and Machinic Philosophies.

One area rich in exemplars of practice, performance and technology is that of Intermediality; though it is also rather sprawling and contested, I found resonance with other practitioners and inspiring examples of practice. One example in particular is the work of Jo Scott, someone I had referred to early on in my PhD proposal.

Jo Scott is an artist and research student investigating “New Forms of Liveness in Live Intermedial Performance”.

“This practice-as-research project addresses liveness in performance and investigates its construction and manifestation, specifically within an intermedial context. Taking as its starting point definitions of liveness posited by Peggy Phelan, Philip Auslander and Erika Fischer-Lichte, the focus of the research is to interrogate such definitions through practice.” (Jo Scott 2013)

In her paper, “Dispersed and Dislocated: The construction of liveness in live intermedial performance”, Scott (2012) discusses how her practice-based research informs theories on liveness and intermediality, arguing that two elements essential to creating a sense of liveness are the real-time nature of the technology-enhanced performance unfolding in space and time, and the unpredictability of its direction at any moment.

In her writing Jo made reference to “Intermediality in Theatre and Performance” (Chapple and Kattenbelt 2006). Though I was unable to find a copy of the book, an updated online version, Mapping Intermediality in Performance, was available, and here I found a resource full of writings and examples that produced further resonance with my own rather foggy sense of where I wanted to head in my research in ‘multi-modal performance and technology’.

Rather than the traditional approach of producing a written thesis, PaR advocates a PhD submission which includes practice (live performance), documentary evidence of practice (DVD) and “complementary writing”. In addition, notes, copies of journals and a website such as this are all recognised as evidence of research.

Rigour not only extends to knowledge of the field (writing, practice) but also to evidence of criticality and self-awareness of one's own approach to the research, especially making the assumed or hidden approaches of the artist (tacit knowledge) visible and explicit.

I am currently working on my ‘complementary writing’, with references and acknowledgements to practices and theory. How might iMorphia be informed and refined through the discourse of intermediality? Are there other discourses that I might use? What form might the new practice take? What theories might be imbricated to produce praxis?

Where next? What next? How?
Theory and Practice = Praxis.

9. Evaluation Exercise – Two Performers

This enactment sought to evaluate whether two performers transformed at the same time might encourage improvisation.

The exercise was carried out in a performance space off-site, which acted as a means of determining the portability of the system and also enabled a black backdrop to be tested as an alternative to the previous white projection surfaces.

The video below illustrates the two performers playfully improvising, verbally whilst transformed into opposite-gender, less idealised body types, and physically in dance-like improvisation, against a black backdrop.

Early observations suggest that enabling two transformed performers to appear on stage at the same time does not immediately result in improvisation. Perhaps this is unsurprising: placing two performers unfamiliar with improvisation on a stage, without a script for them to work with or a scenario designed to encourage improvisation, is likely to produce the same result.

Conversation about the lack of immediate improvisation gave rise to a number of suggestions, including the idea that the addition of a third element would give the performers something to work with and so encourage improvisation. The third element could take a number of forms: the entry of a virtual character, or perhaps a virtual object that the performers could pass to each other. We all felt that a game-like scenario, the throwing of a virtual (or real) ball for instance, would immediately encourage play and improvisation.

There are a variety of techniques and games designed to encourage improvisation, many of these can be found on the website Impro Encyclopedia. These techniques could be used as a basis for creating improvisational interactive scenarios using the iMorphia platform and adapted to exploit the power of virtual scenography and the interactive gaming potential inherent in the Unity Games Engine.

In order to explore the potential of interactive improvisational scenarios and game like performances it is envisaged that the next stage of the research will investigate the addition of interactive objects able to respond to the virtual projected iMorphia characters.

7. Performance and Games Workshop


A two-day collaborative workshop exploring performance, the Kinect and movement-based games took place at Lincoln University on 25th/26th March 2014. The event was organised by Dr Patrick Dickinson and hosted by the Performance and Games Network.

The first day consisted of talks by:

Ida Toft and Sabine Harrer of the Copenhagen Game Collective;
Nick Burton, New Technology Lead of Rare Gaming;
David Renton, Microsoft MVP
(Kinect for Windows technical communities)
and Matt Watkins of Mudlark.

This was followed by group discussions in preparation for the collaborative “hack” day exploring five themes:

  1. Interfaces for Performance (Leaders: Duncan Rowland, Kate Sicchio)
  2. Mobility Impaired Performance (Leader: Kathrin Gerling)
  3. Physical Games in Playgrounds (Leader: Grethe Mitchell)
  4. Performative interfaces to seed social encounters (Leader: John Shearer)
  5. Audience and Movement Games (Leader: Patrick Dickinson)

I joined the Interfaces for Performance group, where we had a lively discussion on notions of interface, HCI and human-human interfaces, with the idea of creating challenging, embarrassing and awkward interactive acts and interfaces (inspired by Sabine Harrer and her work on awkward games).

The large group split into sub-groups to develop individual and group sub-projects. I worked with artist/performer/dancer Ruth Gibson of Igloo, exploring the idea of motion capture (Cinema Mocap) as a tool for improvised performance.


Playing on the idea of awkwardness, the hack demo was conceptualised as a game where one person would record a short awkward, challenging or embarrassing performance for a second person to try to copy or improvise upon.

Ruth’s initial performance involved rapid and complex movements and challenged the ability of the mocap system to record correctly, resulting in distorted limbs and inhuman movements. The glitches however inspired Ruth to produce a motion capture of an inhuman looking movement:

In the discussion after the demo it was suggested that the prototype resembled a motion capture version of the game of Exquisite Corpse, leading to discussions of how it could be developed into a game with scoring and also find application in serious games such as dance training.

The ability to capture and replay motion within the Unity games engine offers scope for further performance experiments and scripting opportunities for the development of an improvisation or practice tool.

The following video illustrates how expressive actions can be captured and re-represented by a male and a female Unity character.

Further research will investigate the difference between possessing a Unity character – where it copies you – and being possessed by it – where you try to copy it. A convolution-like algorithm could be used to generate a ‘coherence value’ indicating the closeness of the movements, which could be used to give real-time user feedback or generate a score. Presenting the coherence value in real time via colour or sound would help the performer learn to copy and move in time with the movements of the character. Applications of coherence feedback might be found in “serious games” such as dance practice, sports exercise and tai chi.
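A minimal sketch of how such a coherence value might be computed. This is an illustrative assumption, not working iMorphia code: the function name, the joint-angle input format and the window size are all my own, and a real implementation would operate on full Kinect skeleton data rather than a single angle stream.

```python
def coherence(performer, character, window=5):
    """Return a 0..1 score of how closely the performer's motion
    tracks the character's, using a sliding-window (convolution-like)
    comparison of two equal-length joint-angle streams."""
    n = min(len(performer), len(character))
    if n == 0:
        return 0.0
    scores = []
    for i in range(n):
        lo = max(0, i - window + 1)
        # mean absolute difference over the most recent window of frames
        diff = sum(abs(performer[j] - character[j])
                   for j in range(lo, i + 1)) / (i + 1 - lo)
        # map distance to a 0..1 closeness score (0 diff -> 1.0)
        scores.append(1.0 / (1.0 + diff))
    return sum(scores) / n
```

A score of 1.0 would indicate perfect copying; the per-frame scores could equally drive live colour or sound feedback rather than being averaged.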

6. “User Testing”

Below are videos taken from a number of participants, acting as an early form of “user testing”, an HCI term I am borrowing for purposes of illustration. Strictly speaking it is not classic user testing, as no formal ethnographic studies were carried out – research questions were not formulated or posed, nor were user interviews conducted or user feedback recorded. However, as a form of open-ended user feedback the “experiments” (another value-laden term in classic research) proved useful, and also underlined the value of exposing the system to more participants in the forthcoming workshops.

Applying a form of auto-ethnographic analysis, I observed that new participants highlighted the differences between someone versed in using the system and its constraints (myself) – such as tracking speed and coherence of body mapping – and those encountering it for the first time.

New users pushed the limits of the system and gave positive feedback on “glitches” I had tried to avoid – such as the system mis-tracking, resulting in a limb jumping out of place or characters contorting in an unrealistic fashion.

Verbal feedback from female participants puppeteering a male and a female character also proved interesting. One performer commented on the challenge she felt on becoming the surfer-dude character, visually judging him as the sort of person she would not want to talk to in everyday life. This observation suggests a series of further tests and the creation of a range of characters that people might feel uncomfortable with.

Another female participant commented on the feeling of alienation of appearing as a male, stating that she knew she was a woman, not a man, and so felt a strong disconnection from the projected character. From her comments, the same participant appeared to feel more disturbed when taking on the realistic female character in a bathing costume, and used the term uncanny without prompting. Such reactions might also be connected with “cognitive dissonance”. However, if I wished to analyse people's reactions to taking on projected characters of a different gender from a psychological perspective, I would need to bring in expert help.


5. Research Review: Theory and Practice

The recent practical experiments were motivated by the desire to create a transformational experience for a performer (or performers) and their audience using multi-modal technology (projection, live responsive computer generated characters and Kinect body sensing).

A research question might be “Can a projected responsive avatar produce a sense of the uncanny in a performer and/or an audience?”

Classic research requires that this hypothesis be tested and validated, typically through user testing, questions and analysis. Rather than simply testing a hypothesis, my personal preference is to discover how other performers react to the system, how it might be further developed and whether it has any value. To this end it is planned that a number of workshops will be held in approximately 8–10 weeks' time, after a series of questions and planned scenarios have been developed – a workshop structure.

Meanwhile I do feel that this approach has a limited trajectory: it is not difficult to envisage how a more stable and believable system might be developed, and one can imagine scenarios and short scenes of how it might be used. If this were an arts project with an intended public audience, I would be focussing on improving the quality and interactive responses of the system, developing scripts and creating believable and engaging content.

However, this is research, and I am unsure exactly how to balance theory and practice. Further, I am not entirely clear what an appropriate research methodology would be, given that my work and approach sits somewhat uncomfortably between Art Practice and Computer Science.

My feeling is that the Unity/Kinect model has reached an end and that other techniques need to be explored. If this were purely an arts-practice-led PhD, I believe this would be a valid and acceptable mode of enquiry: new techniques need to be tried without resorting to establishing a hypothesis and then testing it to determine its validity. I refer back now to my research proposal, where I examined various research methods, especially the Performative Research Manifesto envisaged by Brad Haseman.

Taking its name from J.L. Austin’s speech act theory, performative research stands as an alternative to the qualitative and quantitative paradigms by insisting on different approaches to designing, conducting and reporting research. The paper concludes by observing that once understood and fully theorised, the performative research paradigm will have applications beyond the arts and across the creative and cultural industries generally.

(Haseman 2006).


Two new interactive practice-driven methodologies I wish to explore are:

1. The use of Augmented Reality as a means of creating invisible props that can respond to the performer.

2. The exploration of the performative use of Virtual Human technologies being developed by the Virtual Humans Research group at the Institute for Creative Technologies USC.

These two methodologies encompass two very different underlying forms of improvisation and interaction – the first seeks to create a space for improvisation, the unexpected and magic, relying more on performer improvisation than system improvisation. The second methodology places more emphasis on system improvisation, where characters have more “life” and the performer has to respond or interact with independent agent based entities.

In order to establish whether a Virtual Human might feasibly be used in a performative context, I have downloaded the Virtual Human Toolkit, which integrates with the Unity framework. The toolkit appears to offer many unusual and interesting capabilities: voice recognition, gaze awareness and the creation of scripts to define responses to user interactions.

4. Kinect and Unity – Semi-realistic characters

The previous post dealt with the generation and acquisition of more realistic human characters suitable for importing into the Unity games engine and controllable by performers via the Kinect plugin. This post features four video demonstrations of the results.

1. Live character projection mapping
A Unity asset character attempting to follow the poses and walking movements of the performer, with varying degrees of success.


2. Live MakeHuman character projection mapping
The character is exported from MakeHuman as a Collada (.dae) asset suitable for importing into Unity. The character exhibits a greater degree of realism and may at times be perceived as somewhat uncanny. Its behaviour is limited by its inability to move horizontally with the performer.


3. Live DAZ character projection mapping
The imported semi-realistic human character is a free asset included with the DAZ software. The eyes are incorrectly rendered, but this accidentally produces a somewhat eerie effect. The character can be seen to follow the movements of the performer with reasonable coherence; glitches appear when the performer moves too close to the back wall and the Kinect becomes incapable of tracking the performer correctly.


4. Live two character projection mapping
This video is perhaps one of the more interesting, in that watching two characters appears to be more engaging than watching one. We tend to read into the video as if the characters are interacting with each other and having a dialogue. One might imagine they are a couple arguing over something, when in fact the two performers were simply testing the interaction of the system, moving back and forth and independently moving their arms without attempting to convey any meaningful interaction or dialogue.

3. Realism and The Uncanny

One of the criticisms of the original MikuMorphia was that because it looked like a cartoon, it was far from the uncanny. The Uncanny Valley of computer graphics appears to be located somewhere between human and inhuman realism – where a 3D render makes the viewer feel uncomfortable because it is almost convincing as a mimetic version of a human, but something feels or appears not quite right. In order to explore this in-between region I therefore had to look towards acquiring or producing more realistic models of humans.


Unity and 3d human-like models

The chosen 3D graphics engine is Unity, and fortunately it is able to import a variety of standard 3D models in a number of formats. Rather than attempting to create a model of a human from scratch, I investigated downloading free models from a variety of sources, including Turbosquid and TF3DM. Many of these models exhibited a reasonable amount of realism; however, for a model to work with the Kinect it also needs to possess a compatible armature, so that the limbs move in response to the actor.


  Character illustrating internal armature

Rigging a 3D model with an armature such that the outer skin moves in a realistic manner is a non-trivial task, requiring skills in the use of complex 3D modelling tools such as Maya, 3ds Max or Blender. I had high hopes for the automated rigging provided by the online software Mixamo; however, the rigging it generated proved incompatible with the Kinect requirements.

The astounding open-source software package MakeHuman enables the generation of an infinite range of human-like figures, with sliders to adjust weight, age, sex and skin colouring, and to make subtle alterations to limb lengths and facial features.


This package offers a refreshing alternative to the endless fantasy and game-like characters prevalent among human character models on the internet. The generated armature is almost compatible with the Kinect requirements, such that a figure can mimic the movement of the actor; but due to the lack of one vital element (the root node), the actor has to remain in one position, as the virtual character is unable to follow the actor if they move left or right. I will be investigating the feasibility of adding this additional root node via one of the aforementioned 3D modelling tools.
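To illustrate why the root node matters, here is a minimal sketch of the general technique of deriving root motion from tracked joint data. The function name, joint names and data layout are my own illustrative assumptions, not the actual Kinect or Unity API: the root bone carries the performer's horizontal translation, while the other joints are re-expressed relative to it, so a rig without a root can only ever animate in place.

```python
def add_root_motion(frames, hip_joint="hips"):
    """Derive a root translation from the hip joint so the whole
    character follows the performer's left/right movement.
    Each frame is a dict of joint name -> (x, y, z) position."""
    out = []
    for frame in frames:
        hx, hy, hz = frame[hip_joint]
        # The root carries the horizontal translation of the body...
        root = (hx, 0.0, hz)
        # ...while every joint is re-expressed relative to the root,
        # so the rig animates locally and the root moves the figure.
        local = {name: (x - root[0], y, z - root[2])
                 for name, (x, y, z) in frame.items()}
        out.append({"root": root, "joints": local})
    return out
```

Without the root, the `local` joint data is all that remains, which is exactly the MakeHuman behaviour described above: the limbs mimic the actor but the figure cannot travel.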

DAZ3D Studio, a free character-modelling package, does successfully generate characters that possess the correct armature. Although the software is free, the characters, clothing and accessories such as hair are all chargeable. Rather than attempting to model characters from scratch, however, this software with its vast library provides a painless and efficient method of generating semi-realistic human-like characters.


Critical Note

I was somewhat surprised by the amount of what can only be termed soft porn available in the form of female 3D models in provocative poses, naked or wearing skimpy clothing – suggesting a strange, predominantly male user base using the software to create animations and fantasies of a form that objectifies women in an adolescent and very non-PC manner.

A Google image search of DAZ3D results in a disturbing overview of the genre.