
20. Co-location and the Oculus Rift

In post 18. Interactive Props and Physics it was noted “Co-location issues are the result of the difficulty in perceiving where the character is in three dimensional space due to the lack of depth perception.”

In this enactment the Oculus Rift VR headset is used to ascertain whether the added depth perception of stereoscopic rendering of the Unity scene might help a performer to locate the virtual props in 3D space.

Three enactments were carried out: two with the viewpoint rendered from a camera positioned at the audience perspective, and one from the first person perspective typically used in VR and gaming.

The video below is a mobile phone recording of a computer monitor rendering the Unity scene in real time. The computer uses an i7 processor and a relatively powerful Nvidia GT 720 graphics card to deliver the stereoscopic rendering to the Oculus Rift. Though the system is able to support the newer Kinect v2, the older Kinect was used in order to maintain continuity with previous enactments.

In the first enactment, one of the previous performers and I carried out the task of knocking the book off the table. We both felt that the task was much easier to accomplish, as the stereoscopic depth made it easy to judge the position of the avatar's hand in relation to the virtual book.

Kinect tracking errors made bending the arm and precise control of the hand a little problematic. Nevertheless, the task felt much easier to achieve than in previous enactments using the monoscopic video camera perspective, as it was possible to see clearly where the virtual hand was, even when it was 'misbehaving'.

However, the added depth perception highlighted a new issue that had previously gone unnoticed: the difficulty of telling front from back. When one moves one's hand forward it moves away from one's body, yet viewed from the camera perspective the hand moves nearer to the camera, the opposite of the direction one is used to. This effect parallels the left-right reversal of a mirror in comparison to the camera view. In both cases it is possible, through practice, to become accustomed to the depth reversal and the lack of mirror reversal, though at first one finds oneself moving in the opposite direction, or using the opposite limb. It is technically possible to produce a mirror reversal, but a depth reversal was felt to be more problematic. A simpler solution, easily achievable using VR, was to give the performer the familiar first person perspective, seeing the scene from the viewpoint of the avatar. In the video, the third enactment, carried out by myself, demonstrates this perspective.
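The distinction between the two reversals can be sketched as coordinate transforms. This is an illustrative sketch, not code from the iMorphia system, and the coordinate convention is an assumption:

```python
# Illustrative sketch of the two reversals discussed above (assumed
# convention: x = left/right, y = up/down, z = depth towards the camera).

def mirror_reversal(joint):
    """Left/right flip, as in a mirror: negate x only."""
    x, y, z = joint
    return (-x, y, z)

def depth_reversal(joint):
    """Front/back flip, as experienced when viewing oneself from the
    camera's side of the stage: negate z only."""
    x, y, z = joint
    return (x, y, -z)

# The performer moves a hand 0.5 units towards the camera:
hand = (0.3, 1.2, 0.5)
print(mirror_reversal(hand))  # (-0.3, 1.2, 0.5)
print(depth_reversal(hand))   # (0.3, 1.2, -0.5)
```

Each reversal is a single sign flip on one axis, which is consistent with the observation above that a mirror reversal is technically simple to produce, whilst what the performer experiences as depth reversal is bound up with the rendering viewpoint itself and so is harder to undo.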

Due to time constraints it was not possible to test this enactment with the external participant. However, despite the incredibly immersive qualities of the first person perspective, I felt there were some serious problems resulting from this viewpoint.

Firstly, I had a very strange out-of-body experience looking down at a virtual body that was not mine; in addition, my virtual limbs and height were completely different from my own, which produced a strong sense of disorientation. Perhaps a male body of similar height and dimensions to my own might have felt more familiar.

The task of knocking the book over felt extremely easy, as I could see my virtual hand in relation to the book from a familiar first person perspective. Despite Kinect tracking issues, it was possible to correct the position of the hand, and ultimately knocking the book over was easy to achieve. Both the depth reversal and the mirror reversal issues were removed by this perspective.

However walking and moving in the scene resulted in a strong degree of vertigo and dizziness. For the first time I experienced “VR motion sickness” and nearly fell over. It was extremely unpleasant!

Further, after taking the headset off, for some minutes I still felt disorientated, somewhat dizzy and a little out of touch with reality.
Although the first person perspective should have felt the most natural, it also produced disturbing side effects which, if not rectified, would make the first person VR perspective unusable, if not hazardous, in a live performance context.

The feelings of vertigo and motion sickness may well have been exaggerated by Kinect tracking issues: the avatar body moved haphazardly, resulting in a disconnect between the viewpoint rendered from the avatar's perspective and where my real head felt itself to be.

Two further practical considerations are: i) the VR headset is tethered by two cables, making it difficult to move freely and safely, and ii) the enclosed headset felt somewhat hot after a short period of time. Light, 'breathable' wireless VR headsets may solve these problems, but the vertigo resulting from the first person perspective whilst moving in 3D space, and the feeling of being in another body, are perhaps more problematic.

The simplest solution, though it retains the depth reversal issue, is to remove the VR head tracking and create a fixed virtual camera giving the audience perspective, paralleling the previous methodology of relaying the audience perspective through a video camera mounted on a tripod.

Before dismissing the VR first person perspective as the sole cause of the motion sickness, it is planned that a further test be carried out using the more accurate Kinect v2 with a virtual body of proportions similar to my own. It is envisaged that the Kinect v2 would result in a more stable first person perspective, with a more familiar viewpoint, closer to the one I am used to with my natural body.

In addition, other gaming-like perspectives might also be tried: the third person perspective, for instance, with a virtual camera located just above and behind the avatar.
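Such a third person camera can be sketched as a simple offset from the avatar's position and facing direction. This is illustrative only; the offset values and coordinate convention are assumptions, not taken from the iMorphia code:

```python
import math

def third_person_camera(avatar_pos, heading_deg, back=2.0, up=1.0):
    """Place a virtual camera just above and behind the avatar.

    heading_deg is the avatar's facing direction in the x-z plane
    (0 degrees = facing +z). The 'back' and 'up' offsets, in metres,
    are invented for illustration.
    """
    x, y, z = avatar_pos
    h = math.radians(heading_deg)
    # Step 'back' metres opposite to the facing direction, 'up' metres higher.
    return (x - back * math.sin(h), y + up, z - back * math.cos(h))

# Avatar at the origin facing +z: the camera sits 2 m behind and 1 m above.
print(third_person_camera((0.0, 0.0, 0.0), 0.0))  # (0.0, 1.0, -2.0)
```

Because the camera follows behind the avatar, forward hand movement also appears to move away from the performer, so this perspective would share the first person view's freedom from depth reversal while keeping the body in frame.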

A key realisation is that the performer's perspective need not necessarily be that of the audience: the iMorphia system might render two (or possibly more) perspectives, one for the audience (the projected scene) and one for the performer. The projected scene would be designed to produce the appropriate suspension of disbelief for the audience, whilst the performer's perspective would be designed to enable the performer to perform efficiently, such that the audience believes the performer to be immersed and present in the virtual scene.
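The dual-perspective idea amounts to one shared scene state rendered once per viewpoint, which can be sketched as follows. This is an assumed design sketch, not the actual iMorphia implementation, and the names and positions are invented:

```python
# One shared scene, rendered independently for each registered viewpoint.

def render(scene, camera):
    # Stand-in for a real renderer: describe what would be drawn.
    return "{} viewed from the {} camera at {}".format(
        scene["name"], camera["name"], camera["position"])

scene = {"name": "stage", "props": ["table", "book", "chair"]}
cameras = [
    {"name": "audience", "position": (0.0, 1.6, -4.0)},  # fixed, front of stage
    {"name": "performer", "position": (0.0, 1.7, 0.0)},  # avatar's head
]

# Each frame, every camera renders the same scene state.
frames = [render(scene, cam) for cam in cameras]
for frame in frames:
    print(frame)
```

The important property is that the cameras diverge only in viewpoint: both draw from the single authoritative scene, so the audience's projection and the performer's aid view can never fall out of sync.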


18. Interactive Props and Physics

The video documentation below illustrates an enactment of iMorphia with props imbued with physics. The addition of rigid body colliders and physical materials to the props and the limbs of the avatar enables Unity to simulate in real time the physical collision of objects and the effects of gravity, weight and friction.
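What the engine does each frame for a prop such as the book can be illustrated with a minimal physics step. This is a sketch in Python rather than Unity code, and the masses, heights and restitution value are invented for illustration:

```python
# Minimal sketch of a physics step for the book: integrate gravity,
# then resolve a collision with the table top, losing energy on impact.

GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # 60 physics steps per second

def step(height, velocity, table_top=0.8, restitution=0.3):
    """Advance the book's vertical state by one physics step
    (semi-implicit Euler: update velocity, then position)."""
    velocity += GRAVITY * DT
    height += velocity * DT
    if height < table_top:                   # collided with the table
        height = table_top
        velocity = -velocity * restitution   # bounce, dissipating energy
    return height, velocity

h, v = 1.2, 0.0  # book knocked 0.4 m above the table
for _ in range(120):  # simulate two seconds
    h, v = step(h, v)
print(round(h, 2))  # settles on the table top, ~0.8
```

Unity's rigid body colliders and physic materials perform the three-dimensional equivalent of this loop, which is why knocked props fall, bounce and come to rest believably without any scripted animation.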

The physics simulation adds a degree of believability to the scene, as the character attempts to interact with the book and chair. The difficulty of control in attempting to make the character interact with the virtual props is evident, resulting in a somewhat comic effect as objects are accidentally knocked over.

Interaction with the physics-imbued props produced unpredictable responses to performance participation, resulting in a dialogue between the virtual props and the performer and a degree of improvisation, for example arms raised in frustration and the kicking over of the chair. These participatory responses suggest that physics-imbued props produce a greater sense of engagement by enhancing the suspension of disbelief: the virtual props appear more believable and realistic than those not imbued with physics.

This enactment once again highlights the problem of co-location between the performer, the projected character and the virtual props. Co-location issues are the result of the difficulty in perceiving where the character is in three dimensional space due to the lack of depth perception. There are also navigational problems resulting from an incongruity between the mapping of the position of the performer's body and limbs in real space and those of the virtual character's avatar in virtual space.

17. Interactive Props

In this experimental enactment I created a minimalist stage-like set consisting of a chair and a table on which rests a book.

[Image: props2]


The video below illustrates some of the issues and problems associated with navigating the set and possible interactions between the projected character and the virtual objects.

Problems and issues:

1. Projected body mask and perspective
As the performer moves away from the Kinect, the virtual character shrinks in size such that the projected body mask no longer matches the performer. Additional scripting to control the size of the avatar, or altering the code in the camera script, might compensate for this, though there may be issues associated with the differences between movements and perceived perspectives in the real and virtual spaces.
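One possible compensation is to enlarge the avatar in proportion to the performer's distance from the sensor. This is an assumed approach, not the actual camera-script code, and the calibrated distance is an invented value:

```python
# Sketch: counteract the apparent shrinking of the projected body mask
# by scaling the avatar with the performer's depth from the Kinect.

def compensation_scale(depth_m, calibrated_depth_m=2.0):
    """Scale factor to apply to the avatar.

    At the calibrated distance (assumed 2 m here) the factor is 1.0;
    further from the sensor the avatar is enlarged proportionally so
    the projection continues to cover the performer's body.
    """
    return depth_m / calibrated_depth_m

print(compensation_scale(2.0))  # 1.0 at the calibrated distance
print(compensation_scale(3.0))  # 1.5 when the performer steps back
```

A linear factor like this would keep the mask roughly matched, though as noted above it cannot resolve the deeper mismatch between the real and virtual perspectives.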

2. Co-location and feedback
The lack of three dimensional feedback in the video glasses results in the performer being unable to determine where the virtual character is in relationship to the virtual objects and thereby unable to successfully engage with the virtual objects in the scene.

3. Real/virtual interactions
There are issues associated with interactions between the virtual character and the virtual objects. In this demonstration objects can pass through each other. In the Unity games engine it is possible to add physical characteristics so that objects can push against each other, but how might this work? Can the table be pushed or should the character be stopped from moving? What are the appropriate physical dynamics between objects and characters? Should there be additional feedback, perhaps in the form of audio to represent tactile feedback when a character comes into contact with an object?

How might the book be picked up or dropped? Could the book be handed to another virtual character?

Rather than trying to create a realistic world where objects and characters behave and interact 'normally', might it be more appropriate, and perhaps easier, to work around the problems highlighted above and create surreal scenarios that do not mimic reality?