E-Briefs on-demand
Thursday, October 28
 

9:00pm EDT

An Approach for Capturing Multi-Directional Radiation Characteristics of Sound Sources for 3D
In the general practice of immersive audio recording, the focus is on capturing the direct sound of a source. However, a sound source’s complex radiation properties help define that source in three-dimensional space. This research explores the idea of a “holographic sound recording,” or HSR. We chose the term “holographic” because, like a holographic visual experience, such a recording can render a real 3D sound source in space.

HSRs can be defined as the capture of a sound source’s complex radiation characteristics with the intention of playback either in a virtual environment with six degrees of freedom or in real life through a multi-directional coincident speaker array. To research techniques for creating sonic holographic reproductions, two recording sessions were conducted at NYU in the summer of 2021. By documenting and reflecting on the miking, mixing, and spatialization of the audio objects in Unity with Google Resonance, experimental insights were gained in search of best practices for creating HSRs.

Concepts such as acoustical points of interest, an adequate number of microphones, and pickup patterns and angles will be explored, as well as capturing room tone and the benefits of player isolation. These aspects are combined into a holographic miking system called the Multi-Channel Pyramid Array (MPA), offered as a starting point for anyone who would like to create a holographic reproduction of an instrument. The MPA can, in principle, be fine-tuned and customized to the instrument and/or the user’s desired results.
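For intuition, the sketch below computes capsule positions for a hypothetical pyramid-style array around an instrument. The abstract does not specify the MPA geometry, so the base radius, height, and capsule count here are illustrative assumptions only:

```python
import numpy as np

def pyramid_array(base_radius=1.0, height=1.2, n_base=4):
    """Hypothetical pyramid mic placement (not the published MPA spec):
    n_base capsules on a circle around the instrument at floor height,
    plus one apex capsule above it."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_base, endpoint=False)
    base = np.stack([base_radius * np.cos(angles),
                     base_radius * np.sin(angles),
                     np.zeros(n_base)], axis=1)
    apex = np.array([[0.0, 0.0, height]])
    return np.vstack([base, apex])  # (n_base + 1, 3) xyz positions in meters

print(pyramid_array())
```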

Speakers
Michael Matsakis

New York University
Parichat Songmuang

Studio Manager/PhD Student, New York University
Parichat Songmuang graduated from New York University with a Master of Music degree in Music Technology and an Advanced Certificate in Tonmeister Studies. As an undergraduate, she studied for her Bachelor of Science in Electronic Media and Film with a concentration...
Paul Geluso

New York University


Thursday October 28, 2021 9:00pm - Friday December 3, 2021 6:00pm EST
On-Demand

9:00pm EDT

Developing plugins for your ears
We present a new, intuitive development platform that allows algorithm developers to put plugins in our ears. The growing number of advanced audio processing plugins developed for DAWs is enabling highly creative sound experiences. We explain how plugins for DAWs can be easily ported to the embedded platforms used in ear-worn products and other audio devices, including signal processing that targets low latency and low power as well as high-compute, large-memory plugins. We describe an open platform that brings machine-learning-based algorithms directly to the end user. This also gives plugin developers access to data streams from additional sensors and to multichannel audio data beyond stereo music streaming. The next generation of hearables for gaming, music, movies, and AR/VR will require processing techniques currently available only to professionals in studios. This new platform allows end users to select, download, and control plugins to unlock innovation that fits their individual needs and personal preferences.
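As a rough illustration of the block-based, parameter-controllable structure such an ear-worn plugin might take, here is a minimal Python sketch. The class and its interface are hypothetical stand-ins, not the platform’s actual API:

```python
import numpy as np

class EarPlugin:
    """Minimal sketch of a block-based hearable plugin: a hypothetical
    interface, not the platform API described in the e-brief."""

    def __init__(self, sample_rate, block_size, gain_db=0.0):
        self.sample_rate = sample_rate
        self.block_size = block_size
        self.gain = 10.0 ** (gain_db / 20.0)

    def set_parameter(self, gain_db):
        # Parameters may be updated by the user between blocks.
        self.gain = 10.0 ** (gain_db / 20.0)

    def process(self, audio_block, imu_block=None):
        # audio_block: (block_size, n_channels) float samples in [-1, 1].
        # imu_block is a stand-in for the extra sensor streams the
        # platform could expose alongside the audio.
        out = self.gain * audio_block
        return np.clip(out, -1.0, 1.0)

# Small blocks keep latency low: 32 samples at 48 kHz is ~0.67 ms.
plugin = EarPlugin(sample_rate=48000, block_size=32, gain_db=-6.0)
out = plugin.process(np.zeros((32, 2)))
```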

Speakers
Gary A. Spittle

Audicus Inc.


Thursday October 28, 2021 9:00pm - Friday December 3, 2021 6:00pm EST
On-Demand

9:00pm EDT

Production Tools for the MPEG-H Audio System
Next Generation Audio Systems, such as MPEG-H Audio, rely on metadata to enable a wide variety of features. Information such as channel layouts, the position and properties of audio objects or user interactivity options are only some of the data that can be used to improve consumer experience.
Creating these metadata requires suitable tools, which are used in a process known as "authoring", where interactive features and the options for 3D immersive sound rendering are defined by the content creator.
Different types of productions each impose their own requirements on these authoring tools, which has led to a number of solutions appearing on the market. Using the example of MPEG-H Audio, this paper details some of the latest developments and authoring solutions designed to enable immersive and interactive live and post productions.
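To make the kind of metadata involved concrete, the snippet below sketches an object-based scene description in Python. The field names are simplified illustrations of channel layout, object position, and user interactivity, not the normative MPEG-H Audio syntax:

```python
# Illustrative authoring metadata for an object-based scene; field names
# are simplified stand-ins, not the normative MPEG-H Audio structures.
audio_scene = {
    "channel_bed": {"layout": "5.1+4H"},        # channel layout of the bed
    "objects": [
        {
            "id": 1,
            "label": "Dialogue",
            "position": {"azimuth_deg": 0.0, "elevation_deg": 0.0},
            "interactivity": {                  # user-adjustable gain range
                "gain_db": {"min": -6.0, "max": 9.0, "default": 0.0},
            },
        },
    ],
    "presets": [                                # content-creator presets
        {"name": "Default", "object_gains_db": {1: 0.0}},
        {"name": "Dialogue+", "object_gains_db": {1: 6.0}},
    ],
}
```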

Speakers
Yannik Grewe

Fraunhofer Institute for Integrated Circuits IIS
Philipp Eibl

Group Manager Media Production Tools, Fraunhofer Institute for Integrated Circuits IIS
Daniela Rieger

Research Associate / Sound Engineer, Fraunhofer Institute for Integrated Circuits IIS
Ulli Scuda

Fraunhofer Institute for Integrated Circuits IIS


Thursday October 28, 2021 9:00pm - Friday December 3, 2021 6:00pm EST
On-Demand

9:00pm EDT

Real-time Implementation of the Spectral Division Method for Binaural Personal Audio Delivery with Head Tracking
A framework for implementing the Spectral Division Method (SDM) in real-time for delivering binaural personal audio to multiple listeners with head tracking is presented. The SDM, as an analytical approach for sound field reproduction, has been applied to generating personal audio filters that create acoustically bright and dark zones. However, only the case of static listening positions has been investigated. In realistic situations, the performance of such personal audio delivery systems will degrade significantly when the listeners move out of the "sweet spots". In order to achieve dynamic personal audio delivery that compensates for listeners' head movements, the SDM-based filters are updated in real-time through simple multiplications in the wavenumber domain, by utilizing the shifting theorem of the spatial Fourier transform along the x-axis. Furthermore, by selecting two spatial window functions targeted at two ears, the generated filters are able to deliver separate binaural personal audio to multiple listeners. The proposed framework offers an intuitive and efficient solution for binaural personal audio delivery with head tracking, at a moderate computation cost.
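The update step rests on the Fourier shift theorem: translating a function by Δx along the x-axis multiplies its wavenumber spectrum by e^(−j·k_x·Δx), so tracked head movement can be absorbed into a per-bin phase factor instead of re-deriving the filters. A minimal numpy sketch of this idea, with placeholder array geometry and driving function:

```python
import numpy as np

n = 64                    # loudspeakers along the x-axis (placeholder)
dx = 0.05                 # loudspeaker spacing in meters (placeholder)
kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # wavenumber axis

# Placeholder driving function for one frequency bin, designed for a
# listener at the original "sweet spot".
rng = np.random.default_rng(0)
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# When the tracked head moves by delta_x, apply the shift theorem in the
# wavenumber domain rather than re-solving the SDM filters.
delta_x = 0.10            # head displacement in meters (2 * dx)
D = np.fft.fft(d)
d_shifted = np.fft.ifft(D * np.exp(-1j * kx * delta_x))

# Sanity check: for a shift of exactly two samples this equals a
# circular shift of the original driving function.
assert np.allclose(d_shifted, np.roll(d, 2))
```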

Speakers
Yue Qiao

PhD candidate, Princeton University
Yue is a fifth-year Ph.D. candidate at Princeton University’s 3D3A Lab, where he focuses on personal sound zone rendering and spatial audio reproduction over loudspeakers. Since his undergraduate study, Yue has contributed to research projects on spatial audio, including Ambisonics...
Edgar Choueiri

Princeton University


Thursday October 28, 2021 9:00pm - Friday December 3, 2021 6:00pm EST
On-Demand
 