Tag Archives: interaction

18. Interactive Props and Physics

The video documentation below illustrates an enactment of iMorphia with props imbued with physics. The addition of rigid body colliders and physical materials to the props and the limbs of the avatar enables Unity to simulate in real time the physical collision of objects and the effects of gravity, weight and friction.
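A minimal sketch of this kind of setup in Unity C# is shown below. The component names (Rigidbody, BoxCollider, PhysicMaterial) are Unity's own; the object this is attached to and all parameter values are illustrative assumptions, not the actual iMorphia configuration, which could equally be set up in the Unity editor rather than in code.

```csharp
using UnityEngine;

// Illustrative sketch: gives a prop a rigidbody, a collider and a
// physic material so Unity simulates gravity, weight and friction.
// Mass, friction and bounciness values are hypothetical.
public class PhysicsProp : MonoBehaviour
{
    void Start()
    {
        // The rigidbody lets the physics engine move the prop under gravity.
        Rigidbody rb = gameObject.AddComponent<Rigidbody>();
        rb.mass = 1.5f;  // illustrative weight in kilograms

        // The collider defines the prop's physical shape for collisions.
        BoxCollider col = gameObject.AddComponent<BoxCollider>();

        // The physic material controls friction and bounce on contact.
        PhysicMaterial mat = new PhysicMaterial();
        mat.dynamicFriction = 0.6f;
        mat.staticFriction = 0.6f;
        mat.bounciness = 0.1f;
        col.material = mat;
    }
}
```

Attaching colliders to the avatar's limbs in the same way is what allows the projected character to knock the props over.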

The physics simulation adds a degree of believability to the scene as the character attempts to interact with the book and chair. The difficulty of controlling the character's interactions with the virtual props is evident, resulting in a somewhat comic effect as objects are accidentally knocked over.

Interaction with the physics-imbued props produced unpredictable responses to performance participation, resulting in a dialogue between the virtual props and the performer. These participatory responses suggest that physics-imbued props produce a greater sense of engagement by enhancing the suspension of disbelief – the virtual props appear more believable and realistic than those not imbued with physics.

This enactment once again highlights the problem of colocation between the performer, the projected character and the virtual props. Colocation issues result from the difficulty of perceiving where the character is in three-dimensional space, due to the lack of depth perception. There are also navigational problems arising from an incongruity between the position of the performer's body and limbs in real space and those of the virtual character's avatar in virtual space.

17. Interactive Props

In this experimental enactment I created a minimalist, stage-like set consisting of a chair and a table on which rests a book.

[Image: props2 – the minimalist set of chair, table and book]

The video below illustrates some of the issues and problems associated with navigating the set and possible interactions between the projected character and the virtual objects.

Problems and issues:

1. Projected body mask and perspective
As the performer moves away from the Kinect, the virtual character shrinks in size, so that the projected body mask no longer matches the performer. Additional scripting to control the size of the avatar, or altering the code in the camera script, might compensate for this, though there may be residual issues arising from the differences between movements and perceived perspectives in the real and virtual spaces.
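One possible compensating script is sketched below, under the assumption that the projected size of the avatar falls off roughly in proportion to the performer's distance from the Kinect. The field names and the PerformerDepth() helper are hypothetical stand-ins for whatever the body-tracking component actually exposes.

```csharp
using UnityEngine;

// Illustrative sketch: rescales the avatar so its projected body mask
// stays matched to the performer as they move away from the Kinect.
// PerformerDepth() is a hypothetical placeholder for a real depth reading.
public class BodyMaskScaler : MonoBehaviour
{
    public float referenceDepth = 2.0f;  // depth at which the mask was calibrated
    public float baseScale = 1.0f;       // avatar scale at the reference depth

    void Update()
    {
        float depth = PerformerDepth();
        // Projected size shrinks roughly as 1/depth, so enlarging the
        // avatar by depth/referenceDepth counteracts the shrinkage.
        float s = baseScale * depth / referenceDepth;
        transform.localScale = new Vector3(s, s, s);
    }

    float PerformerDepth()
    {
        // Placeholder: would read the tracked torso joint's z-distance
        // from the Kinect skeleton data.
        return referenceDepth;
    }
}
```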

2. Colocation and feedback
The lack of three-dimensional feedback in the video glasses means the performer cannot determine where the virtual character is in relation to the virtual objects, and is therefore unable to successfully engage with the virtual objects in the scene.

3. Real/virtual interactions
There are issues associated with interactions between the virtual character and the virtual objects: in this demonstration, objects can pass through each other. In the Unity game engine it is possible to add physical characteristics so that objects push against each other, but how might this work? Can the table be pushed, or should the character be stopped from moving? What are the appropriate physical dynamics between objects and characters? Should there be additional feedback, perhaps in the form of audio to represent tactile feedback when a character comes into contact with an object?
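The audio-feedback idea could be sketched as a collision callback on the avatar's limb colliders, as below. This is a minimal illustration, not the iMorphia implementation: the "Prop" tag is a hypothetical label, and an AudioSource with a contact sound is assumed to be attached in the editor.

```csharp
using UnityEngine;

// Illustrative sketch: plays a sound when a limb collider touches a
// tagged prop, standing in for tactile feedback. The "Prop" tag and
// the attached AudioSource clip are assumptions.
[RequireComponent(typeof(AudioSource))]
public class ContactSound : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("Prop"))
        {
            // Scale volume with impact speed so harder knocks sound louder.
            float volume = Mathf.Clamp01(collision.relativeVelocity.magnitude / 5f);
            AudioSource source = GetComponent<AudioSource>();
            source.PlayOneShot(source.clip, volume);
        }
    }
}
```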

How might the book be picked up or dropped? Could the book be handed to another virtual character?

Rather than trying to create a realistic world where objects and characters behave and interact ‘normally’, might it be more appropriate, and perhaps easier, to work around the problems highlighted above and create surreal scenarios that do not mimic reality?

16. Participation, Conversation, Collaboration

Since the last enactment exploring navigation, I have been looking to implement performative interaction with virtual objects – the theatrical equivalent of props – in order to facilitate Dixon’s notions of participation, conversation and collaboration.

I envisaged implementing a system that would enable two performers to interact with virtual props imbued with real-world physical characteristics. This would give rise to a variety of interactive scenarios: a virtual character might, for instance, choose and place a hat on the head of the other virtual character, pick up a glass and place it on a shelf or table, drop the glass so that it breaks, or collaboratively create or knock down a construction of virtual boxes. Scenarios of this kind are common in computer gaming; the challenge here, however, would be to implement the human–computer interfacing necessary to support natural, unencumbered performative interaction.

This ambition raises a number of technical challenges, including what is likely to be non-trivial scripting and the requirement of fast, accurate body and gesture tracking, perhaps using the Kinect 1.
There are also technical issues associated with the colocation of the performer and the virtual objects, and the need for 3D visual feedback to the performer. These problems were encountered in the improvisation enactment with a virtual ball, discussed in section “3. Depth and Interaction” of post 14. Improvisation Workshop.

The challenges associated with implementing real-world interaction with virtual 3D objects are currently being addressed by Microsoft Research in their investigations of augmented reality, through prototype systems such as Mano-a-Mano and their latest project, the HoloLens.

“Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects.”

“Microsoft HoloLens understands your gestures, gaze, and voice, enabling you to interact in the most natural way possible”

Reviews of the HoloLens suggest that natural interaction with the virtual using body, gesture and voice is problematic, with issues of lag and the misreading of gestures, similar to the problems I encountered during 15. Navigation.

“While voice controls worked, there was a lag between giving them and the hologram executing them. I had to say, “Let it roll!” to roll my spheres down the slides, and there was a one second or so pause before they took a tumble. It wasn’t major, but was enough to make me feel like I should repeat the command.

“Gesture control was the hardest to get right, even though my gesture control was limited to a one-fingered downward swipe”

(TechRadar 6/10/2015)

During today’s supervision meeting it was suggested that, instead of trying to achieve the interactive fidelity I have been envisaging, which is likely to be technically challenging, I work around the problem and exploit the limitations of what is possible with the current iMorphia system.

One suggestion was to implement a moving virtual wall which the performer has to interact with or respond to. This raises questions of how the virtual wall responds to or affects the virtual performer, and how the real performer then responds. Is it a solid wall? Can it pass through the virtual performer? Other real-world physical characteristics, such as weight or lightness, might be imbued in the virtual prop, leading to further performative interactions between real performer, virtual performer and virtual object.
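The moving-wall suggestion could be prototyped along the lines sketched below: a kinematic rigidbody swept across the stage will push any physics-imbued props (or a non-kinematic avatar rigidbody) out of its way while remaining immovable itself. The speed and travel values are hypothetical.

```csharp
using UnityEngine;

// Illustrative sketch of a moving virtual wall: a kinematic rigidbody
// oscillates across the stage, pushing physics-imbued objects it meets.
// Speed and range values are hypothetical.
[RequireComponent(typeof(Rigidbody))]
public class MovingWall : MonoBehaviour
{
    public float speed = 0.5f;  // metres per second, illustrative
    public float range = 3.0f;  // how far the wall travels from its start

    Vector3 start;

    void Start()
    {
        start = transform.position;
        // Kinematic: the wall pushes other bodies but is never pushed back.
        GetComponent<Rigidbody>().isKinematic = true;
    }

    void FixedUpdate()
    {
        // PingPong sweeps the wall back and forth; MovePosition lets the
        // physics engine resolve contacts along the way.
        float offset = Mathf.PingPong(Time.time * speed, range);
        GetComponent<Rigidbody>().MovePosition(start + Vector3.right * offset);
    }
}
```

Making the wall non-kinematic instead would let the performer push back against it, one way of experimenting with the weight or lightness mentioned above.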

11. The Uncanny, Praxis and Intermediality

I have been reading a very in-depth study of The Uncanny by Nicholas Royle (reviewed here) and the fascinating The Freudian Robot by Lydia H. Liu, which explores relationships between Lacan, Claude Shannon, cybernetics and the uncanny.

In January I interviewed the intermedial performer Jo Scott, who recently completed a practice-based PhD at Central, and also met with her supervisor Robin Nelson, author of the incredibly useful and informative Practice as Research in the Arts.

During the interview we discussed many aspects of practice as research: praxis, performance as research and research as performance, negotiating live performance, impro/comprovisation, and the use of technology as a performative medium.

We also talked about influences and other intermedial artists including Forced Entertainment, Laurie Anderson, artist Gary Hill and theatre company 1927.

One of the points both Jo and Robin made with regard to PaR was that, rather than thinking or theorising, one uses practice as a method for working through a problem. This notion struck a chord with my own struggles over where to go next with iMorphia. Rather than trying to analyse the research to date and deduce a future direction, it now feels more appropriate to practise my way forward.

The recorded interview has been transcribed and will serve as a basis for informing the next phase of the practice-based research.

Shana Moulton

Last night at Primary in Nottingham I witnessed the second performance by New York performer Shana Moulton, who uses projections and live performance as a means of evoking and expressing her alter ego Clair.

[Image: shana primary – Shana Moulton performing within a projected virtual set]

The image above illustrates how Shana uses projections to create a virtual set in which she performs. Her alter ego is projected onto an electrically operated armchair; when Shana triggers the chair's lift with a remote control, the projected alter ego rises, floats upwards and escapes through the projected stained glass ceiling.

[Image: shana primary2]

Shana Moulton’s performative work successfully utilises video projections to create engaging, surreal, darkly comic intermedial theatrical performances, as the video below illustrates.

My New Robot Companion

Anna Dumitriu, director of Unnecessary Research, and Alex May exhibit their surreal and uncanny Familiar Head at the Nesta FutureFest in March. Their website My New Robot Companion documents a residency in the Department of Computer Science at the University of Hertfordshire.

[Image: HARR1 – with projected robotic face]

There are resonances here – evoking the uncanny through projection, performance, installation and sensing technologies.

Alex has also written a free software tool, “Painting with Light”, which enables artists to experiment with projection mapping.

[Image: Video sculpture using projection mapping and Painting with Light software, exhibited at Kinetica Art Fair 2014]

5. Research Review: Theory and Practice

The recent practical experiments were motivated by the desire to create a transformational experience for a performer (or performers) and their audience using multi-modal technology (projection, live responsive computer generated characters and Kinect body sensing).

A research question might be “Can a projected responsive avatar produce a sense of the uncanny in a performer and/or an audience?”

Classic research requires that this hypothesis be tested and validated, typically through user testing, questions and analysis. Rather than simply testing a hypothesis, my personal preference is to discover how other performers react to the system, how it might be further developed and whether it has any value. To this end, a number of workshops are planned in approximately 8–10 weeks’ time, once a series of questions and planned scenarios – a workshop structure – has been developed.

Meanwhile I do feel that this approach has a limited trajectory; it is not difficult to envisage how a more stable and believable system might be developed, and one can imagine scenarios and short scenes of how it might be used. If this were an arts project with an intended public audience, I would be focussing on improving the quality and interactive responses of the system, developing scripts and creating believable and engaging content.

However, this is research, and I am unsure exactly how to balance theory and practice. Further, I am not entirely clear what an appropriate research methodology would be, given that my work and approach sits uncomfortably somewhere between Art Practice and Computer Science.

My feeling is that the Unity/Kinect model has reached an end and that other techniques need to be explored. If this were a purely arts-practice-led PhD, I believe this would be a valid and acceptable mode of enquiry: new techniques need to be tried without resorting to establishing a hypothesis and then testing it to determine its validity. I refer back now to my research proposal, where I examined various research methods, especially the Performative Research Manifesto envisaged by Brad Haseman.

Taking its name from J.L. Austin’s speech act theory, performative research stands as an alternative to the qualitative and quantitative paradigms by insisting on different approaches to designing, conducting and reporting research. The paper concludes by observing that once understood and fully theorised, the performative research paradigm will have applications beyond the arts and across the creative and cultural industries generally.

(Haseman 2006).

Two new interactive practice-driven methodologies I wish to explore are:

1. The use of Augmented Reality as a means of creating invisible props that can respond to the performer.

2. The exploration of the performative use of Virtual Human technologies being developed by the Virtual Humans Research group at the Institute for Creative Technologies USC.

These two methodologies encompass two very different underlying forms of improvisation and interaction. The first seeks to create a space for improvisation, the unexpected and the magical, relying more on performer improvisation than system improvisation. The second places more emphasis on system improvisation, where characters have more “life” and the performer has to respond to and interact with independent agent-based entities.

In order to establish whether a Virtual Human might feasibly be used in a performative context, I have downloaded the Virtual Human Toolkit, which integrates with the Unity framework. The toolkit appears to offer many unusual and interesting capabilities: voice recognition, gaze awareness and the creation of scripts defining responses to user interactions.