Spatial auditory masking between real sound signals and virtual sound images


In an augmented reality (AR) environment, audio signals from the real world and the virtual world are presented to a listener simultaneously. Ideally, virtual sound content and real sound sources should not interfere with each other. To investigate this, we examined spatial auditory masking between maskers and maskees, where the maskers were real sound signals emitted from loudspeakers and the maskees were virtual sound images generated using head-related transfer functions (HRTFs) and presented over headphones. Open-ear headphones were used in the experiment, allowing subjects to listen to the audio content while still hearing environmental sound. The results closely resemble those of a previous experiment [1, 2] in which both masker and maskee were real signals emitted from loudspeakers: for a given masker location, the masking threshold levels as a function of maskee location are symmetric with respect to the subject's frontal plane. The masking threshold levels are, however, lower than in the previous experiment, perhaps because of the limited accuracy of sound image localization with HRTFs. The results indicate that spatial auditory masking in human hearing occurs with virtually localized sound images in the same way as with real sound signals.
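The virtual sound images described above are produced by binaural rendering: a mono signal is convolved with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) for the desired source direction. A minimal sketch of that rendering step, assuming NumPy and using synthetic placeholder HRIRs (a real experiment would use measured HRIRs for each azimuth and elevation):

```python
import numpy as np

fs = 48000  # sampling rate in Hz (assumed)

# Placeholder HRIRs for one source direction. These are synthetic stand-ins:
# an exponentially decaying noise burst for the left ear, and the same
# response delayed by a few samples for the right ear to mimic a crude
# interaural time difference. Measured HRIRs would replace these.
rng = np.random.default_rng(0)
n_taps = 256
decay = np.exp(-np.arange(n_taps) / 32.0)
hrir_left = rng.standard_normal(n_taps) * decay
hrir_right = np.roll(hrir_left, 8)

# Mono maskee signal: a 100 ms, 1 kHz tone burst.
t = np.arange(int(0.1 * fs)) / fs
maskee = np.sin(2 * np.pi * 1000.0 * t)

# Binaural rendering: convolve the mono signal with each ear's HRIR.
left = np.convolve(maskee, hrir_left)
right = np.convolve(maskee, hrir_right)
binaural = np.stack([left, right])  # shape: (2 ears, len(maskee) + n_taps - 1)
print(binaural.shape)
```

Played over open-ear headphones, such a two-channel signal localizes as a virtual image at the direction encoded by the HRIRs, while real loudspeaker maskers remain audible through the open ear.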

Speakers

Masayuki Nishiguchi, Akita Prefectural University
Soma Ishihara, Akita Prefectural University
Kanji Watanabe, Akita Prefectural University
Koji Abe, Akita Prefectural University
Shouichi Takane, Akita Prefectural University


Thursday October 28, 2021 9:00pm - Friday December 3, 2021 6:00pm EST
On-Demand