SAVLab: A Virtual Laboratory for Spatial Audio Research

Funding Program: 2019 Call for “Proyectos de Generación de Conocimiento”
Project Duration: June 2020 – February 2024 (3 years + 9 months extension)
Principal Investigator: Arcadio Reyes-Lecuona
Research team: Luis Molina-Tanco, Fabián Arrebola-Pérez, Antonio Díaz-Estrella, María Cuevas-Rodríguez, Daniel González-Toledo


Overview

The SAVLab project built on the open-source 3D Tune-In Audio Toolkit, advancing it into a reference virtual laboratory for spatial audio research. This project explored spatial audio rendering and its impact on sound perception, virtual sound source plausibility, and speech intelligibility.

Despite challenges posed by the COVID-19 pandemic, SAVLab successfully conducted experimental studies, developed new tools, and enhanced the usability and scope of the 3D Tune-In Toolkit, making significant contributions to psychoacoustic research and immersive auditory technology.


Key Results

  1. Research Studies:
    • Speech Intelligibility: Experiments demonstrated the importance of individualized Head-Related Transfer Functions (HRTFs) in improving spatial release from masking, even in complex scenarios like “cocktail party” settings.
    • Dynamic Localization: Studies validated the importance of dynamic cues for accurate sound localization in virtual environments.
  2. Tool Development:
    • Extended the 3D Tune-In Toolkit with:
      • Simulation of sound propagation delays.
      • Ambisonic encoding for spatial sound.
      • Early reflection modeling using the Image Source Method, shared as an open-source tool (GitHub link).
    • Supported integration with the Binaural Rendering Toolbox (BRT), developed in collaboration with the European SONICOM project.
  3. Publications:
    • González-Toledo, D.; Cuevas-Rodríguez, M.; Vicente, T.; Picinali, L.; Molina-Tanco, L.; Reyes-Lecuona, A.: "Spatial release from masking in the median plane with non-native speakers using individual and mannequin head related transfer functions". Journal article. In: The Journal of the Acoustical Society of America, vol. 155, no. 1, pp. 284–293, 2024, ISSN: 0001-4966.
    • Reyes-Lecuona, A.; Cuevas-Rodríguez, M.; González-Toledo, D.; Molina-Tanco, L.; Poirier-Quinot, D.; Picinali, L.: "Hearing loss and hearing aid simulations for accessible user experience". Proceedings article. In: Proceedings of the XXIII International Conference on Human Computer Interaction, Association for Computing Machinery, Lleida, Spain, 2024, ISBN: 9798400707902.
    • Cuevas-Rodríguez, M.; González-Toledo, D.; Gutiérrez-Parera, P.; Reyes-Lecuona, A.: "An experiment replication on dynamic 3D sound localisation using auditory Virtual Reality". Proceedings article. In: Proceedings of the 10th Convention of the European Acoustics Association, 2023.
    • González-Toledo, D.; Molina-Tanco, L.; Cuevas-Rodríguez, M.; Majdak, P.; Reyes-Lecuona, A.: "The Binaural Rendering Toolbox: A Virtual Laboratory for Reproducible Research in Psychoacoustics". Proceedings article. In: Proceedings of the 10th Convention of the European Acoustics Association, 2023.
    • Montalvo-Gallego, M. B.; Artero-Flores, J.; Cajigal-García, A.; Reyes-Lecuona, A.; Florido-Becerra, S.; Navarrete-Gálvez, J. J.: "Experimentos de vista y oído, para una pieza de arte sonoro" [Experiments in sight and hearing, for a sound art piece]. Proceedings article. In: Actas del XXIII Congreso Internacional de Interacción Persona-Ordenador: INTERACCIÓN 2023, pp. 78–85, 2023.
    • Reyes-Lecuona, A.; Bouchara, T.; Picinali, L.: "Immersive Sound for XR". Book chapter. In: Alcañiz, M.; Tromp, J. G.; Sacco, M. (Eds.): Chapter 4, Wiley, 2022, ISBN: 9781119865148.
    • Cuevas-Rodríguez, M.: "3D Binaural Spatialisation for Virtual Reality and Psychoacoustics". PhD thesis. Universidad de Málaga, 2022.
    • Arrebola, F.; González-Toledo, D.; García-Jiménez, P.; Molina-Tanco, L.; Cuevas-Rodríguez, M.; Reyes-Lecuona, A.: "Simulación en tiempo real de las reflexiones tempranas mediante el método de las imágenes" [Real-time simulation of early reflections using the image source method]. Proceedings article. In: 53 Congreso Español de Acústica - TECNIACUSTICA 2022, 2022.
    • Cuevas-Rodríguez, M.; González-Toledo, D.; Reyes-Lecuona, A.; Picinali, L.: "Impact of non-individualised head related transfer functions on speech-in-noise performances within a synthesised virtual environment". Journal article. In: The Journal of the Acoustical Society of America, vol. 149, no. 4, pp. 2573–2586, 2021, ISSN: 0001-4966.
    • Márquez-Moncada, A.; Luis, B. H.; González-Toledo, D.; Cuevas-Rodríguez, M.; Molina-Tanco, L.; Reyes-Lecuona, A.: "Influencia del Audio 3D en la Percepción de la Ganancia de Rotación en un Entorno Virtual. Un Estudio Piloto" [Influence of 3D audio on the perception of rotation gain in a virtual environment: a pilot study]. Journal article. In: Interacción 20/21, 2021.
    • Reyes-Lecuona, A.; Cuevas-Rodríguez, M.; González-Toledo, D.; Molina-Tanco, L.; Picinali, L.: "Speech perception in VR: do we need individual recordings?". Proceedings article. In: pp. 1–1, 2021.
    • Reyes-Lecuona, A.; Moncada, A. M.; Bottcher, H. L.; González-Toledo, D.; Cuevas-Rodríguez, M.; Molina-Tanco, L.: "Audio Binaural y Ganancia de Rotación en Entornos Virtuales" [Binaural audio and rotation gain in virtual environments]. Journal article. In: Revista de la Asociación Interacción Persona Ordenador (AIPO), vol. 2, no. 2, pp. 54–62, 2021.
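The early reflection modelling listed under Tool Development uses the classic Image Source Method: each wall mirrors the sound source across its plane, and every mirror image is treated as a virtual source whose distance to the listener determines the reflection's propagation delay. As a minimal sketch (illustrative only, not the project's actual implementation; function names and the shoebox-room assumption are ours), first-order image sources and their delays can be computed as:

```python
import math

# Illustrative sketch (not the SAVLab / 3D Tune-In code):
# first-order image sources in a shoebox room [0,Lx] x [0,Ly] x [0,Lz].
def first_order_images(src, room):
    """Return the six first-order image-source positions.

    The wall at x=0 maps x -> -x, the wall at x=Lx maps x -> 2*Lx - x,
    and likewise for the y and z walls.
    """
    x, y, z = src
    Lx, Ly, Lz = room
    return [
        (-x, y, z), (2 * Lx - x, y, z),
        (x, -y, z), (x, 2 * Ly - y, z),
        (x, y, -z), (x, y, 2 * Lz - z),
    ]

def reflection_delays(src, listener, room, c=343.0):
    """Propagation delay (s) of each first-order reflection:
    image-to-listener distance divided by the speed of sound."""
    return [math.dist(img, listener) / c
            for img in first_order_images(src, room)]
```

Higher-order reflections repeat the mirroring recursively on the images themselves; rendering each image as a delayed, attenuated virtual source is what links the Toolkit's propagation-delay simulation to its early-reflection model.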


Open Tools and Resources

  • 3D Tune-In Audio Toolkit: the open-source spatial audio toolkit the project built on and extended (propagation delays, Ambisonic encoding, early reflections).
  • Early reflection modelling based on the Image Source Method, shared as an open-source tool (GitHub link).
  • Binaural Rendering Toolbox (BRT): a virtual laboratory for reproducible research in psychoacoustics, developed in collaboration with the European SONICOM project.

Legacy and Future Directions

SAVLab has laid the groundwork for advanced psychoacoustic research tools, integrating auditory models with VR/AR environments. The tools developed under SAVLab will continue to serve the scientific community as benchmarks for spatial audio research and immersive audio applications.