
18. Interactive Props and Physics

The video documentation below illustrates an enactment of iMorphia with props imbued with physics. The addition of rigid body colliders and physical materials to the props and the limbs of the avatar enables Unity to simulate in real time the physical collision of objects and the effects of gravity, weight and friction.
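As a rough illustration of the setup described above, the sketch below shows how a prop might be given these characteristics in Unity. The mass, friction and bounciness values are illustrative rather than the ones actually used in iMorphia; a box collider stands in for whatever collider best fits the prop's mesh.

using UnityEngine;

// Gives a prop the physical behaviour described above: a Rigidbody for
// gravity and weight, a collider for contact, and a physic material for
// friction. The avatar's limbs would normally use kinematic rigidbodies
// instead, so that the animation drives them rather than the physics.
public class PhysicsProp : MonoBehaviour
{
    public float mass = 1f; // illustrative weight in kilograms

    void Awake()
    {
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.mass = mass;
        body.useGravity = true;

        // A physic material controlling friction and bounce on contact.
        PhysicMaterial material = new PhysicMaterial("Prop");
        material.dynamicFriction = 0.6f; // illustrative values
        material.staticFriction = 0.6f;
        material.bounciness = 0.1f;

        Collider collider = gameObject.AddComponent<BoxCollider>();
        collider.material = material;
    }
}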

The physics simulation adds a degree of believability to the scene as the character attempts to interact with the book and chair. The difficulty of controlling these interactions is evident, producing a somewhat comic effect as objects are accidentally knocked over.

Interaction with the physics-imbued props produced unpredictable responses to performance participation, resulting in a dialogue between the virtual props and the performer. These participatory responses suggest that physics-imbued props produce a greater sense of engagement by enhancing the suspension of disbelief: the virtual props appear more believable and realistic than those not imbued with physics.

This enactment once again highlights the problem of colocation between the performer, the projected character and the virtual props. Colocation issues result from the difficulty of perceiving where the character is in three-dimensional space, due to the lack of depth perception. There are also navigational problems arising from an incongruity between the position of the performer's body and limbs in real space and those of the virtual character's avatar in virtual space.

17. Interactive Props

In this experimental enactment I created a minimalist stage-like set consisting of a chair and a table on which rests a book.

[Image “props2”: the minimalist set of chair, table and book]

The video below illustrates some of the problems associated with navigating the set and the possible interactions between the projected character and the virtual objects.

Problems and issues:

1. Projected body mask and perspective
As the performer moves away from the Kinect, the virtual character shrinks in size so that the projected body mask no longer matches the performer. Additional scripting to control the size of the avatar, or altering the code in the camera script, might compensate for this, though there may be issues arising from the differences between movements and perceived perspectives in the real and virtual spaces.
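A minimal sketch of the kind of compensating script suggested above, assuming the Kinect plugin exposes a transform for the tracked torso joint (the field names here are illustrative):

using UnityEngine;

// Keeps the projected body mask roughly matched to the performer by
// scaling the avatar in proportion to the performer's distance from the
// sensor. trackedTorso is assumed to be whatever transform the Kinect
// plugin drives for the torso joint.
public class BodyMaskScaler : MonoBehaviour
{
    public Transform trackedTorso;   // torso joint driven by the Kinect plugin
    public Transform sensorOrigin;   // position of the Kinect in the scene
    public float calibratedDistance = 2f; // distance at which scale = 1

    void LateUpdate() // runs after the tracking data has updated the joints
    {
        float distance = Vector3.Distance(trackedTorso.position, sensorOrigin.position);
        if (distance > 0.1f)
        {
            // Enlarge the avatar as the performer walks away so the
            // projection keeps covering their body, shrink as they approach.
            transform.localScale = Vector3.one * (distance / calibratedDistance);
        }
    }
}

Even with such a correction, the perspective of the virtual camera and that of the performer would still differ, so the match could only ever be approximate.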

2. Colocation and feedback
The lack of three-dimensional feedback in the video glasses means the performer is unable to determine where the virtual character is in relation to the virtual objects, and is thereby unable to engage successfully with the objects in the scene.

3. Real/virtual interactions
There are issues associated with interactions between the virtual character and the virtual objects. In this demonstration objects can pass through each other. In the Unity game engine it is possible to add physical characteristics so that objects push against each other, but how might this work? Can the table be pushed, or should the character be stopped from moving? What are the appropriate physical dynamics between objects and characters? Should there be additional feedback, perhaps in the form of audio to represent tactile feedback when a character comes into contact with an object?
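On the last question, one low-cost approach would be to play a sound whenever a limb collider touches a prop, with the volume scaled by impact speed. A sketch, assuming the props carry the rigid bodies and colliders described in the physics enactment, plus any short knock or thud clip:

using UnityEngine;

// Plays a contact sound whenever another collider strikes this prop,
// substituting audio for the missing tactile feedback. Attach to a prop
// that has a Collider, a Rigidbody and an AudioSource.
[RequireComponent(typeof(AudioSource))]
public class ContactSound : MonoBehaviour
{
    public AudioClip contactClip;       // any short knock or thud sound
    public float minImpactSpeed = 0.2f; // ignore gentle grazes

    void OnCollisionEnter(Collision collision)
    {
        // Scale volume with impact speed so harder knocks sound louder.
        float speed = collision.relativeVelocity.magnitude;
        if (speed > minImpactSpeed)
        {
            float volume = Mathf.Clamp01(speed / 5f);
            GetComponent<AudioSource>().PlayOneShot(contactClip, volume);
        }
    }
}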

How might the book be picked up or dropped? Could the book be handed to another virtual character?
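A common game-engine answer, sketched below, is to suspend the book's physics and parent it to the avatar's hand bone while it is held, reversing the process to drop it. Handing the book to another virtual character would then amount to calling PickUp with the other avatar's hand transform; whether any of this reads as believable in performance is an open question.

using UnityEngine;

// Pick-up and drop by re-parenting: while held, the book's physics is
// suspended and it follows the hand bone; on release its Rigidbody takes
// over again and it falls normally.
public class Holdable : MonoBehaviour
{
    Rigidbody body;

    void Awake() { body = GetComponent<Rigidbody>(); }

    public void PickUp(Transform hand)
    {
        body.isKinematic = true;        // stop physics while held
        transform.SetParent(hand);
        transform.localPosition = Vector3.zero;
    }

    public void Drop()
    {
        transform.SetParent(null);
        body.isKinematic = false;       // gravity takes over and the book falls
    }
}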

Rather than trying to create a realistic world where objects and characters behave and interact ‘normally’, might it be more appropriate, and perhaps easier, to work around the problems highlighted above and create surreal scenarios that do not mimic reality?

16. Participation, Conversation, Collaboration

Since the last enactment exploring navigation, I have been looking to implement performative interaction with virtual objects – the theatrical equivalent of props – in order to facilitate Dixon’s notions of participation, conversation and collaboration.

I envisaged implementing a system that would enable two performers to interact with virtual props imbued with real-world physical characteristics. This would give rise to a variety of interactive scenarios – a virtual character might, for instance, choose and place a hat on the head of the other virtual character, pick up and place a glass on a shelf or table, drop the glass so that it breaks, or collaboratively build or knock down a construction of virtual boxes. These kinds of scenarios are common in computer gaming; the challenge here, however, would be to implement the human-computer interfacing necessary to support natural, unencumbered performative interaction.

This ambition raises a number of technical challenges, including what is likely to be non-trivial scripting and the need for fast, accurate body and gesture tracking, perhaps using the Kinect 1.
There are also technical issues associated with the colocation of the performer and the virtual objects, and the need for 3D visual feedback to the performer. These problems were encountered in the improvisation enactment with a virtual ball and are discussed in section “3. Depth and Interaction” of post 14. Improvisation Workshop.
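In the absence of reliable finger tracking on the Kinect 1, grasping might be approximated by proximity and dwell time: if the tracked hand joint lingers close to a prop, treat that as a grab. A sketch, with the hand transform standing in for whatever the skeleton-tracking plugin provides:

using UnityEngine;

// Proximity-based grab detection: if the performer's hand stays within
// grabRadius of this prop for dwellTime seconds, count that as a grab.
// handJoint is assumed to be whatever transform the Kinect plugin drives
// for the hand.
public class ProximityGrab : MonoBehaviour
{
    public Transform handJoint;      // hand joint driven by the Kinect plugin
    public float grabRadius = 0.15f; // metres
    public float dwellTime = 0.5f;   // seconds the hand must linger

    float timer;

    void Update()
    {
        if (Vector3.Distance(handJoint.position, transform.position) < grabRadius)
        {
            timer += Time.deltaTime;
            if (timer >= dwellTime)
                Debug.Log("Grab detected on " + name); // hook pick-up logic here
        }
        else
        {
            timer = 0f; // hand moved away, reset the dwell timer
        }
    }
}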

The challenges associated with implementing real-world interaction with virtual 3D objects are currently being met by Microsoft Research in their investigations of augmented reality, through prototype systems such as Mano-a-Mano and their latest project, the HoloLens.

“Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects.”

“Microsoft HoloLens understands your gestures, gaze, and voice, enabling you to interact in the most natural way possible.”

Reviews of the HoloLens suggest that natural interaction with the virtual using body, gesture and voice is problematic, with issues of lag and the misreading of gestures similar to the problems I encountered during 15. Navigation.

“While voice controls worked, there was a lag between giving them and the hologram executing them. I had to say, “Let it roll!” to roll my spheres down the slides, and there was a one second or so pause before they took a tumble. It wasn’t major, but was enough to make me feel like I should repeat the command.

“Gesture control was the hardest to get right, even though my gesture control was limited to a one-fingered downward swipe.”

(TechRadar 6/10/2015)

During today’s supervision meeting it was suggested that, instead of trying to achieve the interactive fidelity I have been envisaging, which is likely to be technically challenging, I work around the problem and exploit the limitations of what is possible using the current iMorphia system.

One suggestion was to implement a moving virtual wall which the performer has to interact with or respond to. This raises the question of how the virtual wall responds to or affects the virtual performer, and how the real performer then responds. Is it a solid wall, or can it pass through the virtual performer? Other real-world physical characteristics, such as weight or lightness, might be imbued in the virtual prop, leading to further performative interactions between real performer, virtual performer and virtual object.
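Both readings of the wall, solid or pass-through, could be prototyped with a single flag. A sketch, with illustrative sweep parameters:

using UnityEngine;

// Sweeps a wall back and forth across the stage. With solid = true the
// wall's collider pushes against the avatar's colliders; with solid =
// false it becomes a trigger and passes straight through the virtual
// performer. A kinematic Rigidbody lets the physics engine handle the
// moving collider correctly.
[RequireComponent(typeof(Collider), typeof(Rigidbody))]
public class MovingWall : MonoBehaviour
{
    public bool solid = true;        // solid wall vs pass-through wall
    public float sweepDistance = 3f; // illustrative stage width in metres
    public float speed = 0.5f;

    Vector3 start;
    Rigidbody body;

    void Start()
    {
        start = transform.position;
        body = GetComponent<Rigidbody>();
        body.isKinematic = true;     // scripted motion, unaffected by physics
    }

    void FixedUpdate()
    {
        GetComponent<Collider>().isTrigger = !solid;
        // PingPong sweeps the offset between 0 and sweepDistance and back.
        float offset = Mathf.PingPong(Time.time * speed, sweepDistance);
        body.MovePosition(start + Vector3.right * offset);
    }
}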