
Virtual Theatre

People @ LIMSI:

This work was funded in part by the ANR-ECHO project (ANR-13-CULT-0004). Partners include THALIM/ARIAS-CNRS, Bibliothèque nationale de France (BnF), and LIMSI-CNRS.


Project Description

In the context of the ECHO project, an immersive virtual reality experience was designed. It combined a point cloud recording of a person reciting an Italian poem with a visual and acoustical model of the Theatre Athenee.

To create this VR experience, visual and audio recordings were first made separately in an anechoic room. Using the virtual reality platform BlenderVR, the visual part of this recording was integrated into a visual model of the Theatre Athenee. Additionally, a geometrical acoustics (GA) model of the Theatre Athenee was created and calibrated. The resulting impulse responses were convolved in real time with the audio of the anechoic recordings using the audio platform Max/MSP. Finally, the visual and audio parts were integrated into an immersive rendering in the MEMAVE room.
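
The Max/MSP patch itself is not reproduced here. As an illustration of the underlying technique, the following is a minimal NumPy sketch of block-wise (overlap-add) FFT convolution, the standard approach real-time convolution engines use to apply a long room impulse response to streaming audio; all names, the toy impulse response, and the block size are illustrative, not the project's actual implementation.

  import numpy as np

  def overlap_add_convolver(ir, block_size=512):
      """Return a processor that convolves successive audio blocks with `ir`
      using FFT-based overlap-add, as real-time convolution engines do."""
      n_fft = 1 << int(np.ceil(np.log2(block_size + len(ir) - 1)))
      ir_fft = np.fft.rfft(ir, n_fft)
      tail = np.zeros(n_fft - block_size)  # overlap carried into later blocks

      def process(block):
          nonlocal tail
          out = np.fft.irfft(np.fft.rfft(block, n_fft) * ir_fft, n_fft)
          out[:n_fft - block_size] += tail    # add what earlier blocks left over
          tail = out[block_size:].copy()      # save the new overlap
          return out[:block_size]

      return process

  # Toy usage: a decaying-noise "impulse response" applied to a dry signal.
  rng = np.random.default_rng(0)
  ir = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 400.0)
  dry = rng.standard_normal(4096)
  process = overlap_add_convolver(ir)
  wet = np.concatenate([process(dry[i:i + 512]) for i in range(0, 4096, 512)])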

This work combines the research efforts of the ANR-ECHO project on digital heritage acoustic recreations with the development of interactive virtual reality environments in the BlenderVR project.


Recording of the point cloud

The visuals of a person reciting an Italian poem were recorded using a Kinect in the anechoic room at IRCAM, resulting in a point cloud. The point cloud was rendered in BlenderVR using the blendervr-cloud shader by Dalai Felinto. The sound was recorded using an omnidirectional microphone.
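
For illustration, converting a Kinect depth frame into such a point cloud is a standard pinhole back-projection. The sketch below uses NumPy with illustrative camera intrinsics; the actual capture pipeline and calibration used for the recording are not documented here.

  import numpy as np

  # Illustrative depth-camera intrinsics, not the calibration actually used.
  FX, FY = 585.0, 585.0   # focal lengths in pixels
  CX, CY = 320.0, 240.0   # principal point of a 640x480 depth image

  def depth_to_point_cloud(depth_m):
      """Back-project an (H, W) depth image in metres into an (N, 3) point
      cloud with the pinhole model; zero-depth (invalid) pixels are dropped."""
      h, w = depth_m.shape
      u, v = np.meshgrid(np.arange(w), np.arange(h))
      x = (u - CX) * depth_m / FX
      y = (v - CY) * depth_m / FY
      points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
      return points[points[:, 2] > 0]

  # Synthetic example; a real pipeline would stream Kinect frames instead.
  depth = np.full((480, 640), 2.0)      # a flat surface 2 m away
  cloud = depth_to_point_cloud(depth)   # -> (307200, 3) array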


Visual model

The visual model was provided by Dominique Lemaire, technical director at the Theatre Athenee. It was created by Francis Cousson.


Calibration of the GA model

A GA model of the Theatre Athenee was created using the software CATT-Acoustic (v.9.0.c, TUCT v1.1a). In order to create realistic auralizations, this model needed to be calibrated. The calibration commenced with acoustic measurements, which served as a reference (see figure below).

[Figure: acoustic measurements in the Theatre Athenee]

Using the measurement results, the GA model was calibrated following the methodical procedure detailed in [1]. The figure below depicts the model geometry.

[Figure: geometry of the GA model (view4.jpg)]
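
The procedure in [1] iteratively adjusts the model's absorption and scattering coefficients until simulated room-acoustic parameters match the measured ones, octave band by octave band. As a minimal broadband illustration of one such parameter, the sketch below estimates the reverberation time T30 from an impulse response by Schroeder backward integration; the function name and variables are ours, not from [1].

  import numpy as np

  def t30(ir, fs):
      """Estimate the reverberation time T30 (s) from an impulse response:
      Schroeder backward integration, then a line fit on the -5..-35 dB
      range of the decay, extrapolated to -60 dB."""
      edc = np.cumsum(ir[::-1] ** 2)[::-1]              # energy decay curve
      edc_db = 10 * np.log10(np.maximum(edc, 1e-30) / edc[0])
      t = np.arange(len(ir)) / fs
      sel = (edc_db <= -5.0) & (edc_db >= -35.0)
      slope, _ = np.polyfit(t[sel], edc_db[sel], 1)     # dB per second
      return -60.0 / slope

  # Calibration check (per octave band in practice; variables are placeholders):
  # assert abs(t30(ir_sim, fs) - t30(ir_meas, fs)) < just_noticeable_difference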

Auralizations were created by convolving the anechoic audio recordings with impulse responses from both measurement and simulation. The calibration was validated by comparing these auralizations in a listening test. As the acoustics were perceived as sufficiently similar at a number of positions, we are confident that listening at other positions provides a realistic virtual reconstruction.

These stereophonic audio extracts represent a person reciting an Italian poem on stage as heard from a position in the audience.


Integration of the acoustic model, visual model, and point cloud

The next step was to integrate the acoustic and visual models with the point cloud. This was done in the MEMAVE room at LIMSI.

The visual part was presented on a large screen in the room (see figure below). Tracking information, obtained from the tracking system present in the room, was analyzed by BlenderVR and sent to the audio platform Max/MSP. Translation information was used to update the visuals, and rotation information was used to update the audio.

[Figure: setup in the MEMAVE room (memave-setup.jpeg)]
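
BlenderVR and Max/MSP typically exchange such data over OSC. The sketch below, using the python-osc package, shows the idea; the port and address pattern are hypothetical, as the actual patch defines its own OSC namespace.

  from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

  # Hypothetical endpoint and OSC address; the real patch defines its own.
  client = SimpleUDPClient("127.0.0.1", 9001)

  def on_tracker_update(position, orientation):
      """Called on each tracking frame. Translation updates the visual
      viewpoint inside BlenderVR; only the head rotation is forwarded
      to the audio engine, matching the setup described above."""
      yaw, pitch, roll = orientation  # degrees, for example
      client.send_message("/listener/orientation", [yaw, pitch, roll])

  on_tracker_update(position=(0.0, 0.0, 1.7), orientation=(45.0, 0.0, 0.0))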

To make the 3D visual rendering more immersive, two screens perpendicular to the current one will be added, onto which additional visuals will be projected.


First result

The audio part of the 3D virtual reconstruction can be explored interactively over headphones. As the MEMAVE room is equipped with a 32-channel Ambisonics system, the audio part can also be explored in Ambisonics.
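
As a minimal illustration of the encoding side of such a system (the actual rendering is likely higher order and includes the simulated room response), the following sketch encodes a mono signal into first-order B-format with traditional FuMa weighting:

  import numpy as np

  def encode_foa(signal, azimuth, elevation):
      """Encode a mono signal at (azimuth, elevation), in radians, into
      first-order B-format (W, X, Y, Z) with traditional FuMa weighting."""
      w = signal / np.sqrt(2.0)
      x = signal * np.cos(azimuth) * np.cos(elevation)
      y = signal * np.sin(azimuth) * np.cos(elevation)
      z = signal * np.sin(elevation)
      return np.stack([w, x, y, z])

  # A one-second noise source 30 degrees to the listener's left:
  sig = np.random.default_rng(1).standard_normal(48000)
  bformat = encode_foa(sig, np.deg2rad(30.0), 0.0)
  # Decoding to the 32-loudspeaker array (or binaurally for headphones)
  # is then a fixed linear matrix applied to these four channels.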

The following video presents a recording of the created application.


Second stage

With the addition of two more screens, a highly enveloping simulation is possible.

The addition of recorded actors puts the simulation into context.

The following video presents a recording of the created application, using a fisheye lens to show the full system from the user's position.


[1] B. N. Postma and B. F. Katz, “Creation and calibration method of virtual acoustic models for historic auralizations,” Virtual Reality, vol. 19, no. SI: Spatial Sound, pp. 161–180, 2015, doi:10.1007/s10055-015-0275-3.

[2] B. N. Postma, A. Tallon, and B. F. Katz, “Calibrated auralization simulation of the abbey of Saint-Germain-des-Prés for historical study,” in Intl. Conf. on Auditorium Acoustics, vol. 37, (Paris), pp. 190–197, Institute of Acoustics, Oct. 2015, URL.

[3] B. F. Katz, D. Q. Felinto, D. Touraine, D. Poirier-Quinot, and P. Bourdot, “BlenderVR: Open-source framework for interactive and immersive VR,” in IEEE Virtual Reality (IEEE VR), (Arles), pp. 203–204, Mar. 2015, URL.

[4] B. N. Postma and B. F. G. Katz, “Correction method for averaging slowly time-variant room impulse response measurements,” J. Acoust. Soc. Am., vol. 140, pp. EL38–43, July 2016, doi:10.1121/1.4955006.

