The new version enabled a character to be switched instantly between male and female at the press of a mouse button, and tattoos to be added or removed. Rather than costuming the characters, I chose to create naked characters using the latest edition of MakeHuman. The idea of a person donning a white boiler suit over their clothes and then appearing virtually naked, I felt, added an element of risk and surreal drama to the occasion.
Five visitors to the exhibition chose to experience iMorphia whilst a small audience watched the proceedings. Positive feedback from the participants and audience confirmed the effectiveness of the illusion in producing a strange and disturbing, unworldly, ghost-like character. One person commented that from a distance they thought they were watching a film, until they came closer and were surprised to realise the character was being projected onto, and controlled in real time by, a performer.
Recorded footage of iMorphia once again demonstrated how participants improvised around glitches produced by Kinect tracking errors; laughter resulted when one of the participants broke the tracking entirely by squatting down:
In the post “iMorphia – props and physics” it was noted “Co-location issues are the result of the difficulty in perceiving where the character is in three dimensional space due to the lack of depth perception.”
In this enactment the Oculus Rift VR headset is used as a means of ascertaining whether the added depth perception of the stereoscopic rendering of the Unity scene might help a performer to locate the virtual props in 3D space.
Three enactments were carried out: two with the viewpoint rendered from a camera positioned at the audience perspective, and one from the first-person perspective typically used in VR and gaming.
The video below is a mobile phone recording of a computer monitor rendering the Unity scene in real time. The computer uses an i7 processor and a relatively powerful Nvidia GT 720 graphics card to deliver the stereoscopic rendering to the Oculus Rift. Though the system is able to support the new Kinect v2, the older Kinect was used in order to maintain continuity with previous enactments.
In the first enactment, one of the previous performers and I carried out the task of knocking the book off the table. We both felt that the task was much easier to accomplish, with the stereoscopic depth enabling one to easily judge the position of the avatar's hand in relation to the virtual book.
Kinect tracking errors made bending the arm and precise control of the hand a little problematic. Nevertheless, the task felt much easier to achieve than in previous enactments using the monoscopic video camera perspective, as it was possible to see clearly where the virtual hand was, even when it was ‘misbehaving’.
However, the added depth perception highlighted a new issue that had previously gone unnoticed: difficulty in telling front from back. When one moves one's hand forward it moves away from oneself, whereas when viewed from the camera perspective the hand moves nearer to the camera, the opposite of the direction one is used to. This effect parallels the left-right reversal of a mirror in comparison to the camera view. In both cases it is possible, through practice, to become accustomed to the depth reversal and the lack of mirror reversal, though at first one finds oneself moving in the opposite direction, or using the opposite limb. It is technically possible to produce a mirror reversal, but a depth reversal was felt to be more problematic. A simpler solution, easily achievable using VR, was to give the performer the first-person perspective one is normally used to: seeing the scene from the viewpoint of the avatar. In the video, the third enactment, carried out by myself, demonstrates this perspective.
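The two reversals can be illustrated with a toy coordinate sketch. This is purely for illustration, not the actual iMorphia implementation; the axes and function names are my own assumptions (dx = movement to the performer's right, dz = movement forward, away from the performer), with each function returning the apparent (screen-x, screen-depth) displacement the performer sees in that view:

```python
# Toy sketch of the viewpoint reversals (illustrative only).
# dx = movement to the performer's right, dz = forward movement.

def first_person(dx, dz):
    # Natural view: right stays right, forward movement recedes.
    return (dx, dz)

def mirror(dx, dz):
    # A mirror keeps left/right on the same side of the visual
    # field, but a forward movement approaches the viewer.
    return (dx, -dz)

def audience_camera(dx, dz):
    # A camera facing the performer swaps left/right on screen
    # AND reverses depth: forward movement nears the camera.
    return (-dx, -dz)

# Moving the right hand sideways (dx=1): the mirror keeps it on
# the performer's right, the audience camera shows it on the left.
print(mirror(1.0, 0.0))           # (1.0, 0.0)
print(audience_camera(1.0, 0.0))  # (-1.0, 0.0)
```

The audience-camera view thus differs from the familiar first-person view in both axes, which is why practice is needed to adapt to it.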
Due to time constraints it was not possible to test this enactment with the external participant. However, despite the incredibly immersive qualities of the first-person perspective, I felt there were some serious problems resulting from this viewpoint.
Firstly, I felt a very strange out-of-body experience looking down at a virtual body that was not mine; in addition, my virtual limbs and height were completely different from my own, which produced a strong sense of disorientation. Perhaps a male body of similar height and dimensions to my own might have felt more familiar.
The task of knocking the book over felt extremely easy, as I could see my virtual hand in relation to the book from a familiar first-person perspective. Despite Kinect tracking issues it was possible to correct the position of the hand, and ultimately knocking the book over was easy to achieve. Both the depth-reversal and mirror-reversal issues were removed in this perspective.
However walking and moving in the scene resulted in a strong degree of vertigo and dizziness. For the first time I experienced “VR motion sickness” and nearly fell over. It was extremely unpleasant!
Further, after taking the headset off, for some minutes I still felt disorientated, somewhat dizzy and a little out of touch with reality.
Although the first-person perspective should have felt the most natural, it also produced disturbing side effects which, if not rectified, would make the first-person VR perspective unusable, if not hazardous, in a live performance context.
The feelings of vertigo and motion sickness may well have been exacerbated by Kinect tracking issues: the avatar body moving haphazardly resulted in a disconnect between the viewpoint rendered from the avatar's perspective and where my real head thought it was.
Two further practical considerations are: i) the VR headset is tethered by two cables, making it difficult to move freely and safely, and ii) the enclosed headset felt somewhat hot after a short period of time. Light, ‘breathable’ wireless VR headsets may solve these problems, but the vertigo resulting from moving in 3D space from a first-person perspective, and the feeling of being in another body, are perhaps more problematic.
The simplest solution, though still subject to the depth-reversal issue, is to remove the VR head tracking and create a fixed virtual camera giving the audience perspective, paralleling the previous methodology of relaying the audience perspective through a video camera mounted on a tripod.
Before dismissing the VR first-person perspective as the sole cause of motion sickness, a further test is planned using the more accurate Kinect v2 with a virtual body of proportions similar to my own. It is envisaged that the Kinect v2 would produce a more stable first-person perspective, with a more familiar viewpoint, closer to the one I am used to with my natural body.
In addition, other gaming-like perspectives might also be tried: the third-person perspective, for instance, with a virtual camera located just above and behind the avatar.
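As a rough illustration, a third-person camera of this kind can be positioned from the avatar's pose alone. The following Python sketch uses assumed conventions (y up, yaw measured in the x/z plane); the function name and offsets are hypothetical, not part of the iMorphia Unity implementation:

```python
import math

def third_person_camera(avatar_pos, yaw_deg, back=2.0, up=1.5):
    """Place a virtual camera just above and behind the avatar
    (an over-the-shoulder gaming perspective). avatar_pos is
    (x, y, z); yaw_deg is the avatar's heading. Offsets are
    illustrative assumptions."""
    x, y, z = avatar_pos
    yaw = math.radians(yaw_deg)
    fx, fz = math.sin(yaw), math.cos(yaw)   # avatar forward vector
    # Step `back` metres opposite to forward, raise by `up` metres.
    return (x - back * fx, y + up, z - back * fz)

print(third_person_camera((0.0, 0.0, 0.0), 0.0))  # (0.0, 1.5, -2.0)
```

Because the camera follows the avatar's heading, the performer's left/right mapping stays consistent, avoiding the mirror-reversal confusion of the fixed audience view.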
A key realisation is that the performer's perspective need not be that of the audience: the iMorphia system might render two (or possibly more) perspectives, one for the audience (the projected scene) and one for the performer. The projected scene would be designed to produce the appropriate suspension of disbelief for the audience, whilst the performer's perspective would be designed to enable the performer to perform efficiently, such that the audience believes the performer to be immersed and present in the virtual scene.