
14. Comparative Study

On the 26th and 27th of May I carried out two workshops designed to compare improvisation and performative engagement across the two intermedial stages of PopUpPlay and iMorphia. The performers had participated in the previous two workshops, so were familiar with iMorphia, but had not worked with PopUpPlay before.

My sense, outlined in the previous post, that PopUpPlay would provoke improvisation proved correct, as did my sense that iMorphia in its current form is a constrained environment with little scope for improvisation.

The last workshop tested whether having two performers transformed at the same time might encourage improvisation. We found this was not the case, and that a third element, or some sort of improvisational structure, was required. The latest version of iMorphia features a backdrop and a virtual ball, endowed with simulated physics, which interacts with the feet and hands of the two projected characters. This resulted in some game playing between the performers, but facilitated only a limited and constrained form of improvisation centred around a game. The difference between game and play, and its implications for the future development of iMorphia, are outlined at the end of this post.
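The ball's response to a kick can be pictured as a simple impulse applied when a tracked joint enters the ball's radius. The sketch below is illustrative plain Python, not the actual implementation (iMorphia uses the Unity physics engine); the function name, radius and velocities are assumptions for the sake of the example.

```python
def kick(ball_pos, ball_vel, joint_pos, joint_vel, radius=0.25):
    """If a tracked foot or hand is within the ball's radius,
    transfer the joint's velocity to the ball as an impulse.
    Positions and velocities are (x, y, z) tuples."""
    dist = sum((b - j) ** 2 for b, j in zip(ball_pos, joint_pos)) ** 0.5
    if dist <= radius:
        # Contact: add the joint's velocity to the ball's velocity.
        ball_vel = tuple(bv + jv for bv, jv in zip(ball_vel, joint_vel))
    return ball_vel

# A stationary ball gains the foot's velocity on contact.
print(kick((0, 0, 0), (0, 0, 0), (0.1, 0, 0), (1.5, 0, 0)))  # (1.5, 0, 0)
```

In Unity the same idea is usually expressed by attaching a collider to each tracked joint and letting the engine resolve the impulse; the point here is only that contact is a proximity test followed by a velocity transfer.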

In contrast, PopUpPlay, though it required me to operate the system, resulted in a great deal of improvisation and play, as exemplified in the video below.

OBSERVATIONS

1. Mirroring
The first workshop highlighted the confusion between left and right arms and feet when a performer attempted to either kick a virtual ball or reach out to a virtual object. This confusion had been noted in previous studies and is due to the unfamiliar third person perspective relayed to the video glasses from the video camera located in the position of an audience member.

Generally the only time we see ourselves is in a mirror, and as a result we have become trained to accept seeing ourselves horizontally reversed. In the second workshop I positioned a mirror in front of the camera at 45 degrees so as to produce a mirror image of the stage in the video glasses.

[Image: camera with 45-degree mirror]

I tested the effect using the iMorphia system and was surprised by how comfortable and familiar the mirrored video feedback felt; I had no problems working out left from right or interacting with the virtual objects on the intermedial stage. The effectiveness of the mirrored feedback was also confirmed by the two participants in the second workshop.
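The optical mirror's effect could equally be achieved in software by horizontally flipping each video frame before it reaches the glasses. A minimal sketch, assuming a frame represented simply as rows of pixels (not iMorphia's actual video pipeline):

```python
def mirror_frame(frame):
    """Horizontally flip a video frame (a list of pixel rows),
    producing the mirror image a performer expects to see."""
    return [list(reversed(row)) for row in frame]

# A tiny 2x3 "frame": each pixel labelled by its column.
frame = [["a", "b", "c"],
         ["d", "e", "f"]]
print(mirror_frame(frame))  # [['c', 'b', 'a'], ['f', 'e', 'd']]
```

A software flip would remove the need for the physical mirror rig, at the cost of whatever latency the extra processing introduces.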

2. Gaming and playing
The video highlights how PopUpPlay successfully facilitated improvisation and play, whilst iMorphia, despite the addition of responsive seagulls to the ball-playing beach scene, resulted in a constrained game-like environment, where the performers simply played a ball-passing game with each other. Another factor to be recognised is the role of the operator in PopUpPlay, where I acted as a ‘Wizard of Oz’ behind-the-scenes director, controlling and influencing the improvisation through the choice of virtual objects and their on-screen manipulation. My ideal would be to make such events automatic and to embody these interactions within iMorphia.

We discussed the differences between iMorphia and PopUpPlay, and also the role of the audience: how might improvisation on the intermedial stage work from the perspective of an audience? How might iMorphia or PopUpPlay be extended so as to engage both performer and audience?

All the performers felt that there were times when they wanted to move into the virtual scenery, to walk down the path of the projected forest, and to navigate the space more fully. We felt that the performer should become more like a shamanistic guide, able to break through the invisible walls of the virtual space, to open doors, to choose where they go, to perform the role of an improvisational storyteller, and to act as a guide for the watching audience.

The vision was that of a free, open, interactive space of the kind found in modern gaming worlds, where players are free to explore large open environments. Rather than following the typical gaming motifs of winning, battling and scoring, however, these worlds would be designed to encourage performative play. The computer game “Myst” (1993) was mentioned as an example of a more gentle, narrative, evocative and exploratory form of gaming.

3. Depth and Interaction
The above ideas, though rich with creative possibilities, highlight some of the technical and interactive challenges of combining real bodies on a three-dimensional stage with a two-dimensional virtual projection. PopUpPlay utilises two-dimensional backdrops, and the movements of the virtual objects are constrained to two dimensions, although the illusion of distance can be evoked by changing the size of the objects. iMorphia, on the other hand, is a simulated three-dimensional space. The interactive ball highlighted interaction and feedback issues associated with the z, or depth, dimension. For a participant to kick the ball, their foot had to be co-located with the ball in all three dimensions. As the ball rested on the ground, the y dimension was not problematic, and the x dimension, left and right, was easy to find; depth within the virtual z dimension, however, proved very difficult to ascertain, with performers having to physically move forwards and backwards to try to bring the virtual body in line with the ball. The video glasses do not provide any depth cues about the performer in real or virtual space. If performers are to move three-dimensionally in both the real and the virtual spaces in such a way that co-location, and thereby real/virtual body/object interactions, can occur, then a method for delivering both virtual and real-world depth information will be required.
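The co-location problem can be made concrete as a per-axis tolerance test. The sketch below is hypothetical (the tolerances and coordinates are assumptions, not measured values from the system), but it shows why depth is the problem axis: x and y errors are easy for the performer to see and correct, whilst a z error of the same size is invisible in the glasses.

```python
def colocated(foot, ball, tol=(0.15, 0.15, 0.15)):
    """Return True if the foot is within the per-axis tolerances of the
    ball in x (left/right), y (height) and z (depth).
    foot and ball are (x, y, z) tuples in virtual-space units."""
    return all(abs(f - b) <= t for f, b, t in zip(foot, ball, tol))

ball = (0.0, 0.0, 2.0)
print(colocated((0.05, 0.0, 2.05), ball))  # True: within all tolerances
print(colocated((0.05, 0.0, 2.60), ball))  # False: off only in depth (z)
```

In the second case the performer looks correctly lined up in the two-dimensional view yet never connects with the ball, which is exactly the behaviour observed in the workshop.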

 

9. Evaluation Exercise – Two Performers

This enactment sought to evaluate whether transforming two performers at the same time might encourage improvisation.

The exercise was carried out in a performance space off site, which served both to test the portability of the system and to allow a black backdrop to be tried as an alternative to the previous white projection surfaces.

The video below illustrates the two performers improvising playfully, both verbally and in physical, dance-like movement, whilst transformed into opposite-gender, less idealised body types against a black backdrop.

Early observations suggest that enabling two transformed performers to appear on stage at the same time does not immediately result in improvisation. Perhaps this is unsurprising: placing two performers unfamiliar with improvisation on a stage, without a script to work with or a scenario designed to encourage improvisation, is unlikely to produce spontaneous improvisation.

Conversation about the lack of immediate improvisation gave rise to a number of suggestions, including the idea that a third element would give the performers something to work with and so encourage improvisation. This third element could take a number of forms: the entry of a virtual character, or perhaps a virtual object that the performers could pass to each other. We all felt that a game-like scenario, the throwing of a virtual (or real) ball for instance, would immediately encourage play and improvisation.

There are a variety of techniques and games designed to encourage improvisation; many of these can be found on the website Impro Encyclopedia. These techniques could be used as a basis for creating improvisational interactive scenarios on the iMorphia platform, adapted to exploit the power of virtual scenography and the interactive gaming potential inherent in the Unity games engine.

In order to explore the potential of interactive improvisational scenarios and game-like performances, it is envisaged that the next stage of the research will investigate the addition of interactive objects able to respond to the projected virtual iMorphia characters.

4. Kinect and Unity – Semi-realistic characters

The previous post dealt with the generation and acquisition of more realistic human characters suitable for importing into the Unity games engine and controllable by performers via the Kinect plugin. This post features four video demonstrations of the results.

1. Live character projection mapping
A Unity asset character attempting to follow the poses and walking movements of the performer, with variable degrees of success.

 

2. Live MakeHuman character projection mapping
The character is exported from MakeHuman as a Collada (.dae) asset suitable for importing into Unity. The character exhibits a greater degree of realism and may at times be perceived as somewhat uncanny. Its behaviour is limited by its inability to move horizontally with the performer.

 

3. Live DAZ character projection mapping
The imported semi-realistic human character is a free asset included with the DAZ software. The eyes are incorrectly rendered, but this accidentally produces a somewhat eerie effect. The character can be seen to follow the movements of the performer with reasonable coherence; glitches appear when the performer moves too close to the back wall, at which point the Kinect becomes incapable of tracking the performer correctly.

 

4. Live two character projection mapping
This video is perhaps the most engaging, in that watching two characters appears more compelling than watching one. We tend to read into the video as if the characters are interacting with each other and holding a dialogue. One might imagine they are a couple arguing over something, when in fact the two performers were simply testing the system, moving back and forth and independently moving their arms without attempting to convey any meaningful interaction or dialogue.