The new version enabled a character to be switched instantly between male and female at the press of a mouse button, and tattoos to be added or removed in the same way. Rather than costuming the characters, I chose to create naked characters using the latest edition of MakeHuman. I felt that the idea of a person donning a white boiler suit over their clothes and then appearing virtually naked added an element of risk and surreal drama to the occasion.
Five visitors to the exhibition chose to experience iMorphia whilst a small audience watched the proceedings. Positive feedback from the participants and audience confirmed the effectiveness of the illusion in producing a strange and disturbing, unworldly, ghost-like character. One person commented that from a distance they thought they were watching a film, until they came closer and were surprised to realise that the character was being projected onto, and controlled in real time by, a performer.
Recorded footage of iMorphia once again demonstrated how participants improvised around glitches produced by Kinect tracking errors. Laughter resulted when one of the participants broke the tracking entirely by squatting down:
The video documentation below illustrates an enactment of iMorphia with props imbued with physics. The addition of rigid body colliders and physical materials to the props and the limbs of the avatar enables Unity to simulate in real time the physical collision of objects and the effects of gravity, weight and friction.
The physics simulation adds a degree of believability to the scene, as the character attempts to interact with the book and chair. The difficulty of control in attempting to make the character interact with the virtual props is evident, resulting in a somewhat comic effect as objects are accidentally knocked over.
Interaction with the physics-imbued props produced unpredictable responses to performance participation, resulting in a dialogue between the virtual props and the performer and a degree of improvisation – for example, arms raised in frustration and the kicking over of the chair. These participatory responses suggest that physics-imbued props produce a greater sense of engagement by enhancing the suspension of disbelief – the virtual props appear more believable and realistic than those not imbued with physics.
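Unity provides all of this internally through its Rigidbody and Collider components; no physics code was written for iMorphia itself. As a toy illustration of the principle at work when a prop is knocked over, a minimal gravity-and-collision step might look like this (illustrative values, not Unity's internals):

```python
GRAVITY = -9.81     # m/s^2
DT = 1.0 / 60.0     # simulation step, 60 fps
RESTITUTION = 0.3   # how much bounce the "physical material" gives

def step(y, vy):
    """Advance one frame: apply gravity, then resolve a simple
    ground-plane collision for a prop at height y with velocity vy."""
    vy += GRAVITY * DT
    y += vy * DT
    if y < 0.0:                  # the prop has hit the floor
        y = 0.0
        vy = -vy * RESTITUTION   # lose energy on impact
    return y, vy

# Drop a prop (the book, say) from one metre: after a few bounces
# it settles at rest on the floor.
y, vy = 1.0, 0.0
for _ in range(600):  # ten seconds of simulated time
    y, vy = step(y, vy)
```

Running the loop leaves the prop resting at height zero, which is all a rigid-body engine is doing, frame by frame, when a virtual chair topples over.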
This enactment once again highlights the problem of co-location between the performer, the projected character and the virtual props. Co-location issues result from the difficulty of perceiving where the character is in three-dimensional space due to the lack of depth perception. There are also navigational problems resulting from an incongruity between the mapping of the position of the performer's body and limbs in real space and those of the virtual character's avatar in virtual space.
On the 26th and 27th May I carried out two workshops designed to compare improvisation and performative engagement between the two intermedial stages of PopUpPlay and iMorphia. The performers had previously participated in the last two workshops so were familiar with iMorphia, but had not worked with PopUpPlay before.
My sense, outlined in the previous post, that PopUpPlay would provoke improvisation proved correct, as did my suspicion that iMorphia in its current form is a constrained environment with little scope for improvisation.
The last workshop tested whether having two performers transformed at the same time might encourage improvisation. We found this was not the case, and that a third element or some sort of improvisational structure was required. The latest version of iMorphia features a backdrop and a virtual ball imbued with physics which interacts with the feet and hands of the two projected characters. This resulted in some game playing between the performers, but facilitated only a limited and constrained form of improvisation centred around a game. The difference between game and play, and the implications for the future development of iMorphia, are outlined at the end of this post.
In contrast, PopUpPlay, though requiring myself as operator of the system, resulted in a great deal of improvisation and play as exemplified in the video below.
The first workshop highlighted the confusion between left and right arms and feet when a performer attempted to either kick a virtual ball or reach out to a virtual object. This confusion had been noted in previous studies and is due to the unfamiliar third person perspective relayed to the video glasses from the video camera located in the position of an audience member.
Generally the only time we see ourselves is in a mirror, and as a result we have become trained to accept seeing ourselves horizontally reversed. In the second workshop I positioned a mirror in front of the camera at 45 degrees so as to produce a mirror image of the stage in the video glasses.
I tested the effect using the iMorphia system and was surprised by how comfortable and familiar the mirrored video feedback felt; I had no problems working out left from right or interacting with the virtual objects on the intermedial stage. The effectiveness of the mirrored feedback was also confirmed by the two participants in the second workshop.
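The optical mirror performs, in hardware, the same horizontal flip that could equally be applied in software to the camera feed. A minimal sketch (my own illustration; the workshop used a physical mirror, not code):

```python
def mirror_feed(frame):
    """Horizontally flip a video frame, given as rows of pixels,
    so performers see themselves as they would in a mirror."""
    return [row[::-1] for row in frame]

# A one-row "frame": a bright pixel captured at the left of the
# camera image moves to the right after mirroring, matching the
# left/right orientation a mirror-trained viewer expects.
frame = [[255, 0, 0]]
mirrored = mirror_feed(frame)
print(mirrored)  # [[0, 0, 255]]
```

In a real implementation the same reversal would simply be applied to each captured camera frame before it is sent to the video glasses.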
2. Gaming and playing

The video highlights how PopUpPlay successfully facilitated improvisation and play, whilst iMorphia, despite the addition of responsive seagulls to the ball-playing beach scene, resulted in a constrained game-like environment, where performers simply played a ball-passing game with each other. Another factor to be recognised is the role of the operator in PopUpPlay, where I acted as a ‘Wizard of Oz’ behind-the-scenes director, controlling and influencing the improvisation through the choice of the virtual objects and their on-screen manipulation. My ideal would be to make such events automatic and embody these interactions within iMorphia.
We discussed the differences between iMorphia and PopUpPlay and also the role of the audience: how might improvisation on the intermedial stage work from the perspective of an audience? How might iMorphia or Pop Up Play be extended so as to engage both performer and audience?
All the performers felt that there were times when they wanted to be able to move into the virtual scenery, to walk down the path of the projected forest and to be able to navigate the space more fully. We felt that the performer should become more like a shamanistic guide, able to break through the invisible walls of the virtual space, to open doors, to choose where they go, to perform the role of an improvisational storyteller, and to act as a guide for the watching audience.
The vision was that of a free, open, interactive space of the type present in modern gaming worlds, where players are free to explore large open environments. Rather than following typical gaming motifs of winning, battling, scoring and so on, the worlds would be designed to encourage performative play. The computer game “Myst” (1993) was mentioned as an example of a more gentle, narrative, evocative and exploratory form of gaming.
3. Depth and Interaction
The above ideas, though rich with creative possibilities, highlight some of the technical and interactive challenges of combining real bodies on a three-dimensional stage with a virtual two-dimensional projection. PopUpPlay utilises two-dimensional backdrops and the movements of the virtual objects are constrained to two dimensions – although the illusion of distance can be evoked by changing the size of the objects. iMorphia, on the other hand, is a simulated three-dimensional space. The interactive ball highlighted interaction and feedback issues associated with the z, or depth, dimension. For a participant to kick the ball, their foot had to be co-located near the ball in all three dimensions. As the ball rested on the ground the y dimension was not problematic, and the x dimension, left and right, was easy to find; however, depth within the virtual z dimension proved very difficult to ascertain, with performers having to physically move forwards and backwards in order to try to bring the virtual body in line with the ball. The video glasses do not provide any depth cues of the performer in real or virtual space, and if performers are to be able to move three-dimensionally in both the real and the virtual spaces in such a way that co-location, and thereby real/virtual body/object interactions, can occur, then a method for delivering both virtual and real-world depth information will be required.
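The co-location test itself is trivial once positions are known; the difficulty lies entirely in the performer perceiving depth. A sketch of the kind of three-dimensional proximity check involved before a kick can register (the threshold value is my own illustrative assumption, not a measured system parameter):

```python
def can_kick(foot, ball, threshold=0.15):
    """Return True when the avatar's foot lies within `threshold`
    metres of the ball on every axis. foot and ball are (x, y, z)
    positions in virtual-world coordinates."""
    return all(abs(f - b) <= threshold for f, b in zip(foot, ball))

# x (left/right) and y (height) are easy for the performer to line
# up; z (depth) is the axis they cannot judge through the glasses.
print(can_kick((0.0, 0.0, 1.0), (0.05, 0.0, 1.6)))    # False: depth is off
print(can_kick((0.0, 0.0, 1.55), (0.05, 0.0, 1.6)))   # True: co-located
```

The check only succeeds when all three coordinates agree, which is why performers were reduced to shuffling forwards and backwards, hunting for the invisible z alignment.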
On Thursday 26th February 2015 I attended the launch of Pop Up Play at De Montfort University, a free “Open Source” mixed reality toolkit for schools.
The experience of PopUpPlay was described as a hybrid mix of theatre, film, game and playground.
It was extremely refreshing and inspiring to witness the presentation of the project and experience a live hands-on demonstration of the toolkit.
The presentation included case studies with videos showing how children used the system and feedback from teachers and workshop leaders on its power and effectiveness.
Feedback from the trials indicated how easily and rapidly children took to the technology, mastering the controls and creating content for the system.
What was especially interesting in the light of iMorphia was the open framework and inherent intermedial capabilities presented by the system. A simple interface enabled the control of background images, webcam image input and Kinect 3D body sensing, as well as control of DMX lights and the inclusion of audio and special effects.
The system also supported a wireless iPad tablet presenting a simplified and robust control interface designed for children, rather than the more feature-rich computer interface. The touch interface also enabled modification of images through familiar touch-screen gestures such as pinch, expand, rotate and slide.
“The overarching aims of this research project were to understand how Arts and cultural organisations can access digital technology for creative play and learning, and how we can enable children and young people to access meaningful digital realm engagement.
In response to this our specific objectives were to create a mixed reality play system and support package that could:
Immerse participants in projected images and worlds
Enable children to invest in the imaginary dimensions and possibilities of digital play
provide a creative learning framework, tools, guides and manuals and an online community
Offer open source software, easy to use for artists, learning officers, teachers, librarians, children and young people”
Two interesting observations drawn by the research team from the case studies were the role playing of the participants and the design of a set of ideation cards to help stimulate creative play.
Participants tended to adopt the roles of Technologist, Director, Player, Constructor and Observer, though they might also swap or take on multiple roles throughout the experience.
The ideation cards supplied suggestions for activities or actions based on four categories: Change, Connect, Create and Challenge.
Change – change a parameter in the system.
Connect – carry out an action that makes connections in the scene.
Create – create something to be used in the scene.
Challenge – a new task to be carried out.
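The card mechanic is simple enough to capture in a few lines of code: a category drawn at random whenever a scene stalls. A sketch (my own illustration, using the category descriptions above; not part of the actual toolkit):

```python
import random

# The four categories reported by the Pop Up Play team,
# with the descriptions given above.
IDEATION_CARDS = {
    "Change": "Change a parameter in the system.",
    "Connect": "Carry out an action that makes connections in the scene.",
    "Create": "Create something to be used in the scene.",
    "Challenge": "A new task to be carried out.",
}

def draw_card(rng=random):
    """Draw a random card to refresh a stalled scene."""
    category = rng.choice(sorted(IDEATION_CARDS))
    return category, IDEATION_CARDS[category]

category, prompt = draw_card()
```

Such a draw could equally be triggered automatically, for instance after a scene has run for the three minutes observed before boredom sets in.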
An interesting observation was that scenes generally did not last more than 3 minutes before the children became bored and something was required to change the scene in some way, hence the use of the ideation cards.
The use of ideation cards as a means of shaping or catalysing performative practice echoes one of the problems Jo Scott mentioned: when a system is too open there is nowhere to go, and some shaping or steering mechanism is required.
A number of audience members commented on the lack of narrative structure, though the team felt that children were quite happy to make it up as they went along and that the system embodied a new ontology – an iterative process moving from moment to moment – which represented a new practice within creative play.
Through the Looking Glass
One of the weaknesses of the system, I felt, was the television-screen aspect: participants watched the mixed reality on a screen in front of them, as if looking into a digital mirror, which tended to break the immersive effect when participants looked at each other. One of the interesting aspects of iMorphia is the removal of the watched screen; instead one watches oneself from the perspective of the audience. It would be interesting to combine Pop Up Play with the third-person viewing technique utilised in iMorphia.
The lack of support for improvisation within iMorphia could potentially be addressed by the Pop Up Play interface. Though the system enables individual elements to be loaded at any time, it does not currently support a structure that would enable scenes or narrative structures to be created or recalled, nor transitions between scenes to be created in the form of a trajectory. Though advertised as open source, the system is implemented in MaxMSP, which would require a licence in order to modify or add to the software.
Though the system was very inspiring, I was viewing it from the perspective of questioning how it might be used in live performance. Apart from the need for a hyper-structure to enable the recall of scenes, another problematic aspect was the need for the subject to be brightly illuminated by a very bright white LED lamp. This is a problem I also encountered when testing out face tracking: it would only work when the face was sufficiently illuminated. The Kinect webcam requires sufficient illumination to be able to “see”, unlike its inbuilt infra-red 3D tracking capability. This need for lighting then clashes with the projector's requirement of a near-dark environment. Perhaps infra-red illumination or a “night-vision” low-lux webcam might solve this problem.
As mentioned in the last blog entry, improvisation is a theme yet to be addressed. This workshop sought to evaluate whether two performers transformed at the same time might encourage improvisation.
The exercise was carried out in a performance space off site, which acted as a means of determining the portability of the system and also enabled a black backdrop to be tested as an alternative to the previous white projection surfaces.
The video below illustrates the two performers playfully improvising, both verbally and through dance-like physical movement, whilst transformed into opposite-gender characters with alternative, less idealised body types against a black backdrop.
Early observations suggest that enabling two transformed performers to appear on stage at the same time does not immediately result in improvisation. Perhaps this is unsurprising: placing two performers unfamiliar with improvisation on a stage, without a script for them to work with or a scenario designed to encourage improvisation, is likely to produce the same result.
Conversation about why there was a lack of immediate improvisation gave rise to a number of suggestions, including the idea that the addition of a third element would give the performers something to work with and encourage improvisation. The third element could take a number of forms: the entry of a virtual character, or perhaps a virtual object that the performers could pass to each other. We all felt that a game-like scenario – the throwing of a virtual (or real) ball, for instance – would immediately encourage play and improvisation.
There are a variety of techniques and games designed to encourage improvisation, many of these can be found on the website Impro Encyclopedia. These techniques could be used as a basis for creating improvisational interactive scenarios using the iMorphia platform and adapted to exploit the power of virtual scenography and the interactive gaming potential inherent in the Unity Games Engine.
In order to explore the potential of interactive improvisational scenarios and game like performances it is envisaged that the next stage of the research will investigate the addition of interactive objects able to respond to the virtual projected iMorphia characters.
I joined the Interfaces for Performance group, where we had a lively discussion on notions of interface, HCI and human-human interfaces, with the idea of creating challenging, embarrassing and awkward interactive acts and interfaces (inspired by Sabine Harrer and her work on awkward games).
The large group split into sub-groups to develop individual and group sub-projects. I worked with artist/performer/dancer Ruth Gibson of Igloo, exploring the idea of motion capture (Cinema Mocap) as a tool for improvised performance.
Playing on the idea of awkwardness, the hack demo was conceptualised as a game in which one person records a short awkward, challenging or embarrassing performance which a second person then tries to copy or improvise around.
Ruth’s initial performance involved rapid and complex movements and challenged the ability of the mocap system to record correctly, resulting in distorted limbs and inhuman movements. The glitches, however, inspired Ruth to produce a motion capture of an inhuman-looking movement:
In the discussion after the demo it was suggested that the prototype resembled a motion capture version of the game of Exquisite Corpse, leading to discussions of how it could be developed into a game with scoring and also find application in serious games such as dance training.
The ability to capture and replay motion within the Unity Games Engine offers scope for further performance experiments and scripting opportunities for the development of an improvisation or practice tool.
The following video illustrates how expressive actions can be captured and re-represented by a male and a female Unity character.
Further research will investigate the difference between possessing a Unity character – where it copies you – and being possessed by it – where you try to copy it. A convolution-like algorithm could be used to generate a ‘coherence value’ indicating the closeness of the movements, which could be used to give real-time user feedback or generate a score. Delivering real-time feedback of the coherence value via colour or sound would result in the performer learning to copy and move in time with the movements of the character. Applications of coherence feedback might be found in “serious games” such as dance practice, sports exercise and tai chi.
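One way such a coherence value could be computed is as a normalised cross-correlation between the two motion streams. A minimal sketch, assuming each stream is reduced to a single joint angle per frame (my own simplification of the full skeletal data):

```python
import math

def coherence(performer, character):
    """Normalised cross-correlation of two equal-length motion traces
    (e.g. one joint angle per frame): 1.0 for identical movement,
    values near 0 for unrelated movement."""
    n = len(performer)
    mp = sum(performer) / n
    mc = sum(character) / n
    num = sum((p - mp) * (c - mc) for p, c in zip(performer, character))
    den = math.sqrt(sum((p - mp) ** 2 for p in performer)
                    * sum((c - mc) ** 2 for c in character))
    return num / den if den else 0.0

# A performer copying the character exactly scores 1.0...
wave = [math.sin(t / 10) for t in range(100)]
exact = coherence(wave, wave)
# ...while lagging behind the character lowers the score.
lagged = [math.sin((t - 15) / 10) for t in range(100)]
behind = coherence(wave, lagged)
```

Mapped onto colour or sound, `exact` versus `behind` is precisely the real-time feedback signal that would let a performer learn to move in time with the character.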
The recent practical experiments were motivated by the desire to create a transformational experience for a performer (or performers) and their audience using multi-modal technology (projection, live responsive computer generated characters and Kinect body sensing).
A research question might be “Can a projected responsive avatar produce a sense of the uncanny in a performer and/or an audience?”
Classic research requires that this hypothesis be tested and validated, typically through user testing, questions and analysis. Rather than simply testing a hypothesis, my personal preference is to discover how other performers react to the system, how it might be further developed and whether it has any value. To this end it is planned that a number of workshops will be held in approximately 8-10 weeks' time, after a series of questions and planned scenarios have been developed – a workshop structure.
Meanwhile I do feel that this approach has a limited trajectory; it is not difficult to envisage how a more stable and believable system might be developed, and one can imagine scenarios and short scenes of how it might be used. If this were an arts project with an intended public audience then I would be focussing on improving the quality and interactive responses of the system, developing scripts and creating believable and engaging content.
However this is research, and I am feeling unsure exactly of how to balance theory and practice. Further, I am not entirely clear as to what is an appropriate research methodology given that my work and approach sits somewhere uncomfortably between Art Practice and Computer Science.
My feeling is that the Unity/Kinect model has reached an end and that other techniques need to be explored. If this were purely an arts practice led PhD then I believe that this would be a valid and acceptable mode of enquiry, that new techniques need to be tried without resorting to establishing a hypothesis and then testing it to determine its validity. I refer back now to my research proposal where I examined various research methods especially the Performative Research Manifesto envisaged by Brad Haseman.
Taking its name from J.L. Austin’s speech act theory, performative research stands as an alternative to the qualitative and quantitative paradigms by insisting on different approaches to designing, conducting and reporting research. The paper concludes by observing that once understood and fully theorised, the performative research paradigm will have applications beyond the arts and across the creative and cultural industries generally.
Two new interactive practice-driven methodologies I wish to explore are:
1. The use of Augmented Reality as a means of creating invisible props that can respond to the performer.
2. The use of Virtual Humans as independent, agent-based characters able to respond to the performer.
These two methodologies encompass two very different underlying forms of improvisation and interaction. The first seeks to create a space for improvisation, the unexpected and magic, relying more on performer improvisation than on system improvisation. The second places more emphasis on system improvisation, where characters have more “life” and the performer has to respond to or interact with independent, agent-based entities.
In order to establish whether a Virtual Human might feasibly be used in a performative context I have downloaded the Virtual Human Toolkit, which integrates with the Unity framework. The toolkit appears to offer many unusual and interesting capabilities: voice recognition, gaze awareness and the creation of scripts to define responses to user interactions.