I was invited to present iMorphia at the Festival of Fools event on April 1st 2017, hosted by the Nottingham Dilettante Society and the theatre group NonSuch.
This was a great opportunity to create a range of alternative foolish characters and update iMorphia so people could choose a character to inhabit, or be given one at random.
Around twenty visitors experienced iMorphia, changing gender and body type, going anime or naked with body tattoos, play acting, dancing, laughing and having fun being foolish.
Visitors don a white boiler suit and video glasses and, facing the projector and the body-tracking Kinect, have a virtual character projected onto their body.
“I’d describe the iMorphia experience as fantastically discombobulating.”
(Sophie Gargett, Dilettante Society)
Visitors inhabiting a range of projected virtual characters are shown in the video below.
The new version enabled a character to be switched instantly between male and female at the press of a mouse button, with or without body tattoos. Rather than costuming the characters, I chose to create naked characters using the latest edition of MakeHuman. The idea of a person donning a white boiler suit over their clothes and then appearing virtually naked added, I felt, an element of risk and surreal drama to the occasion.
Five visitors to the exhibition chose to experience iMorphia whilst a small audience watched the proceedings. Positive feedback from the participants and audience confirmed the effectiveness of the illusion in producing a strange and disturbing, unworldly, ghost-like character. One person commented that from a distance they thought they were watching a film, until they came closer and were surprised to realise that the character was being projected onto, and controlled in real time by, a performer.
Recorded footage of iMorphia once again demonstrated how participants creatively exploited glitches produced by Kinect tracking errors. Laughter resulted when one of the participants broke the tracking entirely by squatting down:
On the 26th and 27th May I carried out two workshops designed to compare improvisation and performative engagement between the two intermedial stages of PopUpPlay and iMorphia. The performers had previously participated in the last two workshops so were familiar with iMorphia, but had not worked with PopUpPlay before.
My sense, outlined in the previous post, that PopUpPlay would provoke improvisation proved correct, as did the sense that iMorphia in its current form is a constrained environment with little scope for improvisation.
The last workshop tested whether having two performers transformed at the same time might encourage improvisation. We found this was not the case and that a third element, or some sort of improvisational structure, was required. The latest version of iMorphia features a backdrop and a virtual ball with physics which interacts with the feet and hands of the two projected characters. This resulted in some game playing between the performers, but facilitated only a limited and constrained form of improvisation centred on a game. The difference between game and play, and the implications for the future development of iMorphia, are outlined at the end of this post.
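The ball itself is implemented in the Unity Games Engine with Kinect joint tracking; the sketch below is only a rough Python illustration of the interaction logic, with all names and values assumed rather than taken from the actual system: a ball, modelled as a simple moving body, is pushed away whenever a tracked hand or foot joint overlaps it.

```python
# Minimal 2D sketch of the ball interaction, using assumed joint data rather
# than the real Kinect/Unity pipeline: when a tracked hand or foot comes
# within the ball's radius, the ball is pushed away along the contact normal.
import math

class Ball:
    def __init__(self, x, y, radius=0.15):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0
        self.radius = radius

    def kick_from(self, joint_x, joint_y, strength=2.0):
        dx, dy = self.x - joint_x, self.y - joint_y
        dist = math.hypot(dx, dy)
        if 0.0 < dist < self.radius:          # joint overlaps the ball
            self.vx += strength * dx / dist   # push along the contact normal
            self.vy += strength * dy / dist

    def step(self, dt=1 / 30, damping=0.98):
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.vx *= damping
        self.vy *= damping

ball = Ball(0.0, 0.0)
ball.kick_from(joint_x=-0.1, joint_y=0.0)     # a foot joint just left of the ball
ball.step()
print(round(ball.x, 3), round(ball.vx, 3))    # ball moves to the right
```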
In contrast, PopUpPlay, though it required me to operate the system, resulted in a great deal of improvisation and play, as exemplified in the video below.
OBSERVATIONS
1. Mirroring
The first workshop highlighted the confusion between left and right arms and feet when a performer attempted to either kick a virtual ball or reach out to a virtual object. This confusion had been noted in previous studies and is due to the unfamiliar third person perspective relayed to the video glasses from the video camera located in the position of an audience member.
Generally the only time we see ourselves is in a mirror, and as a result we have become trained to accept seeing ourselves horizontally reversed. In the second workshop I positioned a mirror in front of the camera at 45 degrees so as to produce a mirror image of the stage in the video glasses.
I tested the effect using the iMorphia system and was surprised by how comfortable and familiar the mirrored video feedback felt; I had no problems working out left from right or interacting with the virtual objects on the intermedial stage. The effectiveness of the mirrored feedback was also confirmed by the two participants in the second workshop.
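The same mirrored view could in principle be produced in software rather than with a physical mirror, by flipping each camera frame horizontally before relaying it to the video glasses. A minimal sketch, assuming an OpenCV capture pipeline (not the one actually used in iMorphia):

```python
# Hypothetical alternative to the 45-degree mirror: flip the camera feed
# horizontally in software before sending it to the video glasses.
import cv2

capture = cv2.VideoCapture(0)          # camera placed at the audience position
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mirrored = cv2.flip(frame, 1)      # flipCode=1: horizontal (left/right) flip
    cv2.imshow("video glasses feed", mirrored)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```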
2. Gaming and playing

The video highlights how PopUpPlay successfully facilitated improvisation and play, whilst iMorphia, despite the addition of responsive seagulls to the ball-playing beach scene, resulted in a constrained game-like environment, where performers simply played a ball-passing game with each other. Another factor to be recognised is the role of the operator in PopUpPlay, where I acted as a ‘Wizard of Oz’ behind-the-scenes director, controlling and influencing the improvisation through the choice of virtual objects and their on-screen manipulation. My ideal would be to make such events automatic and embody these interactions within iMorphia.
We discussed the differences between iMorphia and PopUpPlay and also the role of the audience: how might improvisation on the intermedial stage work from the perspective of an audience? How might iMorphia or PopUpPlay be extended so as to engage both performer and audience?
All the performers felt that there were times when they wanted to be able to move into the virtual scenery, to walk down the path of the projected forest and to be able to navigate the space more fully. We felt that the performer should become more like a shamanistic guide, able to break through the invisible walls of the virtual space, to open doors, to choose where they go, to perform the role of an improvisational storyteller, and to act as a guide for the watching audience.
The vision was that of a free, open interactive space, of the kind present in modern gaming worlds, where players are free to explore large open environments. These worlds would, however, be designed to encourage performative play rather than follow typical gaming motifs of winning, battling, scoring and so on. The computer game “Myst” (1993) was mentioned as an example of a game that embodied a gentler, narrative, evocative and exploratory form of gaming.
3. Depth and Interaction
The above ideas, though rich with creative possibilities, highlight some of the technical and interactive challenges of combining real bodies on a three-dimensional stage with a virtual two-dimensional projection. PopUpPlay utilises two-dimensional backdrops, and the movements of the virtual objects are constrained to two dimensions, although the illusion of distance can be evoked by changing the size of the objects. iMorphia, on the other hand, is a simulated three-dimensional space. The interactive ball highlighted interaction and feedback issues associated with the z, or depth, dimension. For a participant to kick the ball, their foot had to be co-located with the ball in all three dimensions. As the ball rested on the ground, the y dimension was not problematic, and the x dimension, left and right, was easy to find; however, depth within the virtual z dimension proved very difficult to ascertain, with performers having to physically move forwards and backwards in order to try to bring the virtual body in line with the ball. The video glasses do not provide any depth cues of the performer in real or virtual space. If performers are to be able to move three-dimensionally in both the real and the virtual spaces in such a way that co-location, and thereby real/virtual body/object interactions, can occur, then a method for delivering both virtual and real-world depth information will be required.
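One simple possibility, sketched below purely as a hypothetical illustration (the joint and ball variables are assumed, not taken from the actual system), would be to convert the z-axis offset between the tracked foot and the virtual ball into a visual or audible cue for the performer.

```python
# Hypothetical sketch: turn the depth (z) offset between a tracked foot joint
# and the virtual ball into a simple feedback value the performer can follow.
# Joint positions would come from the Kinect skeleton; the ball position from
# the game engine. Names and units here are illustrative assumptions.

def depth_cue(foot_z: float, ball_z: float, max_offset: float = 1.0) -> float:
    """Return 1.0 when foot and ball share the same depth, falling to 0.0
    as the depth offset approaches max_offset (metres)."""
    offset = abs(foot_z - ball_z)
    return max(0.0, 1.0 - offset / max_offset)

# Example: map the cue onto a colour ramp (green = aligned, red = far off)
def cue_to_colour(cue: float) -> tuple[int, int, int]:
    return (int(255 * (1.0 - cue)), int(255 * cue), 0)  # (R, G, B)

print(cue_to_colour(depth_cue(foot_z=1.45, ball_z=1.50)))  # nearly aligned -> green
```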
On Thursday 26th February 2015 I attended the launch of Pop Up Play, a free “Open Source” mixed reality toolkit for schools, at De Montfort University.
The experience of PopUpPlay was described as a hybrid mix of theatre, film, game and playground.
It was extremely refreshing and inspiring to witness the presentation of the project and experience a live hands-on demonstration of the toolkit.
The presentation included case studies with videos showing how children used the system and feedback from teachers and workshop leaders on its power and effectiveness.
Feedback from the trials indicated how easily and rapidly children took to the technology, mastering the controls and creating content for the system.
What was especially interesting in the light of iMorphia was the open framework and inherent intermedial capabilities presented by the system. A simple interface enabled the control of background images, webcam image input and Kinect 3D body sensing, as well as control of DMX lights and the inclusion of audio and special effects.
The system also supported a wireless iPad tablet presenting a simplified and robust control interface designed for children, rather than the more feature-rich computer interface. The touch interface also enabled modification of images through familiar touch-screen gestures such as pinch, expand, rotate and slide.
“The overarching aims of this research project were to understand how Arts and cultural organisations can access digital technology for creative play and learning, and how we can enable children and young people to access meaningful digital realm engagement.
In response to this our specific objectives were to create a mixed reality play system and support package that could:
Immerse participants in projected images and worlds
Enable children to invest in the imaginary dimensions and possibilities of digital play
provide a creative learning framework, tools, guides and manuals and an online community
Offer open source software, easy to use for artists, learning officers, teachers, librarians, children and young people”
Two interesting observations drawn by the research team from the case studies were the role playing of the participants and the design of a set of ideation cards to help stimulate creative play.
Participants tended to adopt the roles of Technologist, Director, Player, Constructor and Observer, though they might also swap or take on multiple roles throughout the experience.
The ideation cards supplied suggestions for activities or actions based on four categories: Change, Connect, Create and Challenge.
Change – change a parameter in the system.
Connect – carry out an action that makes connections in the scene.
Create – create something to be used in the scene.
Challenge – a new task to be carried out.
An interesting observation was that scenes generally did not last more than 3 minutes before the children became bored and something was required to change the scene in some way, hence the use of the ideation cards.
The use of ideation cards as a means of shaping or catalysing performative practice echoes one of the problems Jo Scott mentioned: when a system is too open there is nowhere to go, and some shaping or steering mechanism is required.
A number of audience members commented on the lack of narrative structure, though the team felt that children were quite happy to make it up as they went along and that the system embodied a new ontology, an iterative process moving from moment to moment which represented a new practice within creative play.
Through the Looking Glass
One of the weaknesses of the system, I felt, was the television-screen aspect: participants watched the mixed reality on a screen in front of them, as if looking into a digital mirror, which tended to break the immersive effect when participants looked at each other. One of the interesting aspects of iMorphia, by contrast, is the removal of the watched screen; instead one watches oneself from the perspective of the audience. It would be interesting to combine Pop Up Play with the third-person viewing technique utilised in iMorphia.
The lack of support for improvisation within iMorphia could potentially be addressed by the Pop Up Play interface. Though the system enables individual elements to be loaded at any time, it does not currently support a structure that would enable scenes or narrative structures to be created or recalled, nor transitions between scenes to be composed in the form of a trajectory. Though advertised as Open Source, the actual system is implemented in MaxMSP, which would require a licence in order to modify or add to the software.
Though very inspiring, the system raised questions for me about how it might be used in live performance. Apart from the need for a hyper-structure to enable the recall of scenes, another problematic aspect was the need for the subject to be illuminated by a very bright white LED lamp. This is a problem I also encountered when testing face tracking: it would only work when the face was sufficiently illuminated. The Kinect webcam requires sufficient illumination to be able to “see”, unlike its inbuilt infra-red 3D tracking capability. This need for lighting clashes with the projector's requirement of a near-dark environment. Perhaps infra-red illumination or a “night-vision” low-lux webcam might solve this problem.
This enactment sought to evaluate whether having two performers transformed at the same time might encourage improvisation.
The exercise was carried out in a performance space off site, which acted as a means of determining the portability of the system and also enabled a black backdrop to be tested as an alternative to the previous white projection surfaces.
The video below shows the two performers playfully improvising verbally whilst in opposite genders and alternative, less idealised body types against a black backdrop, as well as engaging in physical, dance-like improvisation.
Early observations suggest that enabling two transformed performers to appear on stage at the same time does not immediately result in improvisation. Perhaps this is unsurprising: placing two performers unfamiliar with improvisation on a stage without a script to work with, or a scenario designed to encourage improvisation, is likely to produce the same result.
Conversation about why there was a lack of immediate improvisation gave rise to a number of suggestions, including the idea that the addition of a third element would give the performers something to work with and encourage improvisation. The third element could take a number of forms: the entry of a virtual character, or perhaps a virtual object that the performers could pass to each other. We all felt that a game-like scenario, the throwing of a virtual (or real) ball for instance, would immediately encourage play and improvisation.
There are a variety of techniques and games designed to encourage improvisation; many of these can be found on the website Impro Encyclopedia. These techniques could be used as a basis for creating improvisational interactive scenarios on the iMorphia platform, adapted to exploit the power of virtual scenography and the interactive gaming potential inherent in the Unity Games Engine.
In order to explore the potential of interactive improvisational scenarios and game-like performances, it is envisaged that the next stage of the research will investigate the addition of interactive objects able to respond to the virtual projected iMorphia characters.
In order to evaluate the effectiveness of, and gain critical feedback on, ‘iMorphia’, the prototype performance system, fourteen performers took part in a series of workshops carried out between the 14th and 18th of April 2014 in the Mixed Reality Lab at Nottingham University.
One of the key observations was that content affects performative behaviour. This was originally posed as a research question in October 2013:
“Can the projected illusion affect the actor such that they feel embodied by the characteristics of the virtual character?”
An interesting observation was the powerful and often liberating effect of changing the gender of male and female participants, producing comments such as “I feel quite powerful like this” (f->m), “I feel more sensual” (m->f).
All participants, when in the opposite gender, expressed awareness of stereotypes: males not wanting to behave in what they perceived as a stereotypical fashion towards the female character, whilst females in male character seemed to relish the idea of playing with male stereotypes. These reactions reflect a contemporary post-feminist society in which the stereotyping of women is politically charged. A number of males reported feeling that they had to respect the female character as if it had an independent life.
One participant likened the effect of changing gender to the medieval ‘Festival of Fools’, where putting on the clothes of the opposite gender is a foolish thing to do and gives permission to play the fool and to break rules, once regarded as a powerful and liberating thing to be able to do. This sentiment was echoed by a number of participants: that the system gave you freedom and permission to be other, other than one's normal everyday self, removed from people's expectations of how one is supposed to behave.
In summary the key observations resulting from the workshops were:
i) The effectiveness of body projection in creating an embodied character that is sufficiently convincing to produce a suspension of disbelief in both performer and audience.
ii) How system artefacts such as lag and glitches from tracking errors were exploited by performers to explore notions of the double and the uncanny.
iii) The affective response of the performer when in character compared to the objective response when viewing the projection as an audience member.
The video below contains short extracts from the four hours of recorded video, with text overlays of comments by the performers.
A two-day collaborative workshop exploring performance, the Kinect, and movement-based games took place at Lincoln University on 25th/26th March 2014. The event was organised by Dr Patrick Dickinson and hosted by the Performance and Games Network.
I joined the Interfaces for Performance group, where we had a lively discussion on notions of interface, HCI and human-human interfaces, with the idea of creating challenging, embarrassing and awkward interactive acts and interfaces (inspired by Sabine Harrer and her work on awkward games).
The large group split into sub-groups to develop individual and group sub-projects. I worked with artist/performer/dancer Ruth Gibson of Igloo, exploring the idea of motion capture (Cinema Mocap) as a tool for improvised performance.
Playing on the idea of awkwardness, the hack demo was conceptualised as a game in which two or more people take turns: one records a short awkward, challenging or embarrassing performance for the next person to try to copy or improvise around.
Ruth’s initial performance involved rapid and complex movements and challenged the ability of the mocap system to record correctly, resulting in distorted limbs and inhuman movements. The glitches, however, inspired Ruth to produce a motion capture of an inhuman-looking movement:
In the discussion after the demo it was suggested that the prototype resembled a motion-capture version of the game of Exquisite Corpse, leading to discussion of how it could be developed into a game with scoring, and how it might find application in serious games such as dance training.
Conclusions
The ability to capture and replay motion within the Unity Games Engine offers scope for further performance experiments and scripting opportunities for the development of an improvisation or practice tool.
The following video illustrates how expressive actions can be captured and re-represented by a male and a female Unity character.
Further research will investigate the difference between possessing a Unity character, where it copies you, and being possessed by it, where you try to copy it. A convolution-like algorithm could be used to generate a ‘coherence value’ indicating the closeness of the movements, which could be used to give real-time user feedback or to generate a score. Delivering real-time feedback of the coherence value via colour or sound would let the performer learn to copy and move in time with the movements of the character. Applications of coherence feedback might be found in “serious games” such as dance practice, sports exercise and tai chi.
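As a rough illustration of what such a coherence value might look like (a hypothetical sketch, not an algorithm used in the project), the score below is the normalised cross-correlation between two short windows of joint positions, approaching 1.0 as the performer's movement matches the character's.

```python
# Hypothetical sketch of a 'coherence value': normalised cross-correlation
# between two short windows of joint positions (performer vs. character).
# The joint data layout and windowing are illustrative assumptions.
import math

def coherence(performer: list[float], character: list[float]) -> float:
    """Return a value near 1.0 when the two movement windows are closely
    matched, near 0.0 when they are unrelated (flattened joint coordinates)."""
    assert performer and len(performer) == len(character)
    mean_p = sum(performer) / len(performer)
    mean_c = sum(character) / len(character)
    dp = [p - mean_p for p in performer]
    dc = [c - mean_c for c in character]
    num = sum(a * b for a, b in zip(dp, dc))
    den = math.sqrt(sum(a * a for a in dp) * sum(b * b for b in dc))
    return max(0.0, num / den) if den else 0.0

# Example: the performer lags the character slightly but follows the same arc
character_window = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
performer_window = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9]
print(round(coherence(performer_window, character_window), 3))  # close to 1.0
```

Mapping this value onto colour or pitch in real time would give the performer the kind of continuous feedback described above, and accumulating it over a performance would yield a score for game-like applications.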