SAVLab: A Virtual Laboratory for Spatial Audio Research
Funding Program: 2019 Call for “Proyectos de Generación de Conocimiento”
Project Duration: June 2020 – February 2024 (3 years + 9 months extension)
Principal Investigator: Arcadio Reyes-Lecuona
Research Team: Luis Molina-Tanco, Fabián Arrebola-Pérez, Antonio Díaz-Estrella, María Cuevas-Rodríguez, Daniel González-Toledo
Overview
The SAVLab project built on the open-source 3D Tune-In Audio Toolkit, advancing it into a reference virtual laboratory for spatial audio research. This project explored spatial audio rendering and its impact on sound perception, virtual sound source plausibility, and speech intelligibility.
Despite challenges posed by the COVID-19 pandemic, SAVLab successfully conducted experimental studies, developed new tools, and enhanced the usability and scope of the 3D Tune-In Toolkit, making significant contributions to psychoacoustic research and immersive auditory technology.
Key Results
- Research Studies:
  - Speech Intelligibility: Experiments demonstrated the importance of individualized Head-Related Transfer Functions (HRTFs) in improving spatial release from masking, even in complex scenarios like “cocktail party” settings.
  - Dynamic Localization: Studies validated the importance of dynamic cues for accurate sound localization in virtual environments.
- Tool Development:
  - Extended the 3D Tune-In Toolkit with:
    - Simulation of sound propagation delays.
    - Ambisonic encoding for spatial sound.
    - Early reflection modeling using the Image Source Method, shared as an open-source tool (GitHub link); a minimal sketch of the method follows this list.
  - Supported integration with the Binaural Rendering Toolbox (BRT), developed in collaboration with the European SONICOM project.
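To make the early-reflection extension more concrete, the sketch below shows the core idea of the Image Source Method for a shoebox room: mirror-image sources are enumerated up to a user-selected reflection order, each with an amplitude derived from a single broadband wall reflection coefficient. This is a minimal, self-contained illustration of the technique, not the 3D Tune-In Toolkit's actual API or code; all identifiers are hypothetical.

```cpp
// image_source_demo.cpp -- illustrative sketch only (not the 3D Tune-In Toolkit API).
// Enumerates the image sources of a shoebox room (Allen & Berkley formulation) up to
// a given reflection order. Build: g++ -std=c++17 image_source_demo.cpp
#include <cmath>
#include <cstdlib>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

struct ImageSource {
    Vec3   position;   // position of the mirror source
    int    order;      // total number of wall reflections
    double amplitude;  // product of reflection coefficients (frequency-independent here)
};

// room: dimensions (Lx, Ly, Lz), walls at 0 and L on each axis.
// src: source position inside the room. beta: wall reflection coefficient (0..1).
std::vector<ImageSource> ComputeImageSources(const Vec3& room, const Vec3& src,
                                             int maxOrder, double beta)
{
    std::vector<ImageSource> images;
    // n, l, m index the unfolded "room copies"; p, q, r select the mirrored coordinate.
    for (int n = -maxOrder; n <= maxOrder; ++n)
      for (int l = -maxOrder; l <= maxOrder; ++l)
        for (int m = -maxOrder; m <= maxOrder; ++m)
          for (int p = 0; p <= 1; ++p)
            for (int q = 0; q <= 1; ++q)
              for (int r = 0; r <= 1; ++r) {
                  int order = std::abs(2 * n - p) + std::abs(2 * l - q) + std::abs(2 * m - r);
                  if (order == 0 || order > maxOrder) continue;  // skip direct path and high orders
                  Vec3 pos {
                      (p ? -src.x : src.x) + 2.0 * n * room.x,
                      (q ? -src.y : src.y) + 2.0 * l * room.y,
                      (r ? -src.z : src.z) + 2.0 * m * room.z };
                  images.push_back({ pos, order, std::pow(beta, order) });
              }
    return images;
}

int main() {
    // 6 m x 4 m x 3 m room, source at (2, 1.5, 1.2), reflections up to 2nd order.
    auto images = ComputeImageSources({6.0, 4.0, 3.0}, {2.0, 1.5, 1.2}, 2, 0.9);
    std::printf("%zu image sources\n", images.size());
    for (const auto& im : images)
        std::printf("order %d  amp %.2f  (%.2f, %.2f, %.2f)\n",
                    im.order, im.amplitude, im.position.x, im.position.y, im.position.z);
}
```

In a binaural renderer, each image source is then treated as an additional delayed and attenuated source, which is how early reflections can be combined with other reverberation renderers in a hybrid approach.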
- Publications
González-Toledo D; Cuevas-Rodríguez M; Vicente T; Picinali L; Molina-Tanco L; Reyes-Lecuona A
Spatial release from masking in the median plane with non-native speakers using individual and mannequin head related transfer functions Journal Article
In: The Journal of the Acoustical Society of America, vol. 155, no. 1, pp. 284-293, 2024, ISSN: 0001-4966.
DOI: 10.1121/10.0024239
Abstract: Spatial release from masking (SRM) in speech-on-speech tasks has been widely studied in the horizontal plane, where interaural cues play a fundamental role. Several studies have also observed SRM for sources located in the median plane, where (monaural) spectral cues are more important. However, a relatively unexplored research question concerns the impact of head-related transfer function (HRTF) personalisation on SRM, for example, whether using individually-measured HRTFs results in better performance if compared with the use of mannequin HRTFs. This study compares SRM in the median plane in a speech-on-speech virtual task rendered using both individual and mannequin HRTFs. SRM is obtained using English sentences with non-native English speakers. Our participants show lower SRM performances compared to those found by others using native English participants. Furthermore, SRM is significantly larger when the source is spatialised using the individual HRTF, and this effect is more marked for those with lower English proficiency. Further analyses using a spectral distortion metric and the estimation of the better-ear effect, show that the observed SRM can only partially be explained by HRTF-specific factors and that the effect of the familiarity with individual spatial cues is likely to be the most significant element driving these results.
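For readers less familiar with the metric discussed above, spatial release from masking is conventionally quantified as a difference of speech reception thresholds (SRTs) between co-located and spatially separated target/masker configurations. The block below states that standard textbook definition; it is not an equation quoted from the paper.

```latex
% Spatial release from masking (SRM), in dB: the improvement in speech
% reception threshold (SRT) when maskers are spatially separated from the
% target rather than co-located with it.
\mathrm{SRM} = \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}}
% Example: SRT_co-located = -6 dB and SRT_separated = -10 dB gives SRM = 4 dB.
```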
Reyes-Lecuona A; Cuevas-Rodríguez M; González-Toledo D; Molina-Tanco L; Poirier-Quinot D; Picinali L
Hearing loss and hearing aid simulations for accessible user experience Proceedings Article
In: Proceedings of the XXIII International Conference on Human Computer Interaction, Association for Computing Machinery, Lleida, Spain, 2024, ISBN: 9798400707902.
DOI: 10.1145/3612783.3612816
Abstract: This paper presents an open-source real-time hearing loss and hearing aids simulator implemented within the 3D Tune-In Toolkit C++ library. These simulators provide a valuable tool for improving auditory accessibility, promoting inclusivity and foster new research. The hearing loss simulator accurately simulates various types and levels of hearing loss, while the hearing aid simulator replicates different hearing aid technologies, allowing for the simulation of real-world hearing aid experiences. Both simulators are implemented to work in real-time, allowing for immediate feedback and adjustment during testing and development. As an open-source tool, the simulators can be customised and modified to meet specific needs, and the scientific community can collaborate and improve upon the algorithms. The technical details of the simulators and their implementation in the C++ library are presented, and the potential applications of the simulators are discussed, showing that they can be used as a valuable support software for UX designers to ensure the accessibility of their products to individuals with hearing impairment. Moreover, these simulators can be used to raise awareness about auditory accessibility issues. Overall, this paper also aims to provide some insight into the development and implementation of accessible technology for individuals with hearing impairments.
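As a rough illustration of what an audiogram-driven simulation involves, the sketch below converts hearing levels (dB HL at standard audiometric frequencies) into per-band attenuation gains and applies them to an already band-split signal. This is a deliberately simplified, hypothetical example; the actual 3D Tune-In hearing loss simulator described in the paper is considerably more sophisticated, and none of these identifiers belong to its API.

```cpp
// audiogram_attenuation_demo.cpp -- simplified illustration only; NOT the 3D Tune-In
// hearing loss algorithm. Shows the basic idea of mapping an audiogram to per-band
// attenuation applied to a band-split audio block.
#include <cmath>
#include <cstdio>
#include <array>
#include <vector>

constexpr std::array<double, 7> kBandCentresHz = {125, 250, 500, 1000, 2000, 4000, 8000};

// Convert hearing levels (dB HL per band) to linear attenuation factors.
std::array<double, 7> GainsFromAudiogram(const std::array<double, 7>& dbHL) {
    std::array<double, 7> gains{};
    for (std::size_t i = 0; i < dbHL.size(); ++i)
        gains[i] = std::pow(10.0, -dbHL[i] / 20.0);  // e.g. 40 dB HL -> gain 0.01
    return gains;
}

// Attenuate each band of a band-split block in place (bands[band][sample]).
void ApplyHearingLoss(std::vector<std::vector<float>>& bands,
                      const std::array<double, 7>& gains) {
    for (std::size_t b = 0; b < bands.size() && b < gains.size(); ++b)
        for (float& s : bands[b]) s *= static_cast<float>(gains[b]);
}

int main() {
    // Example audiogram: mild-to-moderate high-frequency loss.
    std::array<double, 7> audiogram = {10, 10, 15, 25, 40, 55, 60};
    auto gains = GainsFromAudiogram(audiogram);
    for (std::size_t i = 0; i < gains.size(); ++i)
        std::printf("%5.0f Hz: %5.1f dB HL -> gain %.4f\n",
                    kBandCentresHz[i], audiogram[i], gains[i]);

    std::vector<std::vector<float>> bands(7, std::vector<float>(4, 1.0f));  // dummy band-split block
    ApplyHearingLoss(bands, gains);
}
```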
Cuevas-Rodriguez M; González-Toledo D; Gutierrez-Parera P; Reyes-Lecuona A
An experiment replication on dynamic 3D sound localisation using auditory Virtual Reality Proceedings Article
In: Proceedings of the 10th Convention of the European Acoustics Association, 2023.
URL: https://dael.euracoustics.org/confs/landing_pages/fa2023/000744.html
González-Toledo D; Molina-Tanco L; Cuevas-Rodríguez M; Majdak P; Reyes-Lecuona A
The Binaural Rendering Toolbox. A Virtual Laboratory for Reproducible Research in Psychoacoustics Proceedings Article
In: Proceedings of the 10th Convention of the European Acoustics Association, 2023.
URL: https://dael.euracoustics.org/confs/landing_pages/fa2023/001042.html
Montalvo-Gallego M B; Artero-Flores J; Cajigal-García A; Reyes-Lecuona A; Florido-Becerra S; Navarrete-Gálvez J J
Experimentos de vista y oído, para una pieza de arte sonoro. Proceedings Article
In: Actas del XXIII Congreso Internacional de Interacción Persona-Ordenador: INTERACCIÓN 2023, pp. 78–85, 2023.
URL: https://interaccion2023.udl.cat/wp-content/uploads/2023/09/Libro_de_Actas_INTERACCCION_2023_LLEIDA_v1.pdf
Abstract: We present version 3.0 of the work-in-progress project Imagen imperfecta. Through this research we explore the individual's peripheral vision, which lies at the limits of visual perception. In this latest version we also incorporate a binaural 3D audio system into the experimentation. This implementation redirects the research, shifting us from electronic art to sound art. To carry out this project, we redesigned a peripheral-vision helmet prototype so that it can be coupled to headband headphones that the user must wear to walk through the work: an installation composed of animated LED panels. Finally, we consider a series of technical improvements (reduced cabling, batteries, helmet ergonomics) to offer an optimal user experience, while analysing the variations generated in the project at the technical, perceptual and conceptual levels.
Reyes-Lecuona A; Bouchara T; Picinali L
Immersive Sound for XR Book Chapter
In: Tromp, Jolanda G.; Sacco, Marco; Alcañiz, Mariano (Eds.): Chapter 4, Wiley, 2022, ISBN: 9781119865148.
DOI: 10.1002/9781119865810.ch4
Abstract: Sound plays a very important role in everyday life as well as in XR applications, as it will be explained in this chapter. Recent advances and challenges in immersive audio research are presented, discussing how, why, and to which extent there is potential for further development of these technologies applied to XR. The fundamentals of immersive audio rendering for XR are introduced before presenting the main technological challenges still open in the area. Finally, a series of future applications is presented, which the authors envision being examples of the potential of immersive audio in XR, and a research roadmap is outlined.
Cuevas-Rodriguez M
3D Binaural Spatialisation for Virtual Reality and Psychoacoustics PhD Thesis
Universidad de Málaga, 2022.
URL: https://hdl.handle.net/10630/25570
Abstract: The 3DTI Toolkit-BS is designed to manage complex, dynamic acoustic scenes that change in real time. Smooth transitions for moving sources and/or listener were developed to avoid audible artefacts. The library has been evaluated with a battery of tests demonstrating its good dynamic behaviour and performance. The applications of the 3DTI Toolkit-BS library are not limited to Virtual Reality; the library aims to become a reference tool for the execution of psychoacoustics experiments, as it brings together in a single tool several techniques and functionalities developed and evaluated of spatial audio research in the last 20 years. The 3DTI Toolkit-BS tool was tested in a psychoacoustics experiment, a study on the influence of non-individual HRTF on speech intelligibility. It is known that HRTF signals have an impact on speech intelligibility in a Cocktail Party scenario (where the listener tries to focus attention on a particular acoustic stimulus, filtering out all other stimuli). In the experiment, the Speech Reception Threshold (SRT) was measured, showing significant global and individual differences between the SRTs measured using different HRTFs. These results confirmed that for these Cocktail Party situations, the choice of HRTF should be carefully considered for each individual.
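The “smooth transitions” mentioned in the thesis abstract address a general problem in dynamic binaural rendering: when the spatialisation filter changes between audio blocks, naive switching produces audible clicks. One common remedy, shown below as a generic, hypothetical sketch (not the Toolkit-BS implementation), is to render the block with both the old and the new filter state and crossfade between the two results.

```cpp
// crossfade_demo.cpp -- generic sketch only; not 3DTI Toolkit-BS code.
// Linearly crossfades between a block rendered with the previous filter state
// and the same block rendered with the new filter state, avoiding clicks.
#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<float> CrossfadeBlocks(const std::vector<float>& outOld,
                                   const std::vector<float>& outNew) {
    const std::size_t n = std::min(outOld.size(), outNew.size());
    std::vector<float> mixed(n);
    for (std::size_t i = 0; i < n; ++i) {
        const float w = (n > 1) ? static_cast<float>(i) / static_cast<float>(n - 1) : 1.0f;
        mixed[i] = (1.0f - w) * outOld[i] + w * outNew[i];  // fade old out, new in
    }
    return mixed;
}
```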
Arrebola F; Gonzalez-Toledo D; Garcia-Jimenez P; Molina-Tanco L; Cuevas-Rodriguez M; Reyes-Lecuona A
Simulación en tiempo real de las reflexiones tempranas mediante el método de las imágenes Proceedings Article
In: 53 Congreso Español de Acústica - TECNIACUSTICA 2022, 2022.
URL: https://documentacion.sea-acustica.es/publicaciones/Elche22/ID-99.pdf
Abstract: An open-source tool is presented for simulating the early reflections of a room using the image source method. The reflection order and the maximum distance of the mirror sources are user-controllable, and the geometry and absorption profiles of the walls are configurable. The tool is an extension of the 3D Tune-In Toolkit to which propagation-delay simulation and an implementation of the image source method have been added. This reverberation simulation can be complemented with other renderers through a hybrid approach.
Cuevas-Rodríguez M; González-Toledo D; Reyes-Lecuona A; Picinali L
Impact of non-individualised head related transfer functions on speech-in-noise performances within a synthesised virtual environment Journal Article
In: The Journal of the Acoustical Society of America, vol. 149, no. 4, pp. 2573–2586, 2021, ISSN: 0001-4966.
DOI: 10.1121/10.0004220
Abstract: When performing binaural spatialisation, it is widely accepted that the choice of the head related transfer functions (HRTFs), and in particular the use of individually measured ones, can have an impact on localisation accuracy, externalization, and overall realism. Yet the impact of HRTF choices on speech-in-noise performances in cocktail party-like scenarios has not been investigated in depth. This paper introduces a study where 22 participants were presented with a frontal speech target and two lateral maskers, spatialised using a set of non-individual HRTFs. Speech reception threshold (SRT) was measured for each HRTF. Furthermore, using the SRT predicted by an existing speech perception model, the measured values were compensated in the attempt to remove overall HRTF-specific benefits. Results show significant overall differences among the SRTs measured using different HRTFs, consistently with the results predicted by the model. Individual differences between participants related to their SRT performances using different HRTFs could also be found, but their significance was reduced after the compensation. The implications of these findings are relevant to several research areas related to spatial hearing and speech perception, suggesting that when testing speech-in-noise performances within binaurally rendered virtual environments, the choice of the HRTF for each individual should be carefully considered.
Márquez-Moncada A; Bottcher H L; González-Toledo D; Cuevas-Rodriguez M; Molina-Tanco L; Reyes-Lecuona A
Influencia del Audio 3D en la Percepción de la Ganancia de Rotación en un Entorno Virtual. Un Estudio Piloto Journal Article
In: Interacción 20/21, 2021.
URL: https://hdl.handle.net/10630/22942
Reyes-Lecuona A; Cuevas-Rodriguez M; Gonzalez-Toledo D; Molina-Tanco L; Picinali L
Speech perception in VR: do we need individual recordings? Proceedings Article
In: International Conference on Immersive and 3D Audio, pp. 1-1, 2021.
DOI: 10.1109/i3da48870.2021.9610938
Reyes-Lecuona A; Márquez-Moncada A; Bottcher H L; González-Toledo D; Cuevas-Rodríguez M; Molina-Tanco L
Audio Binaural y Ganancia de Rotación en Entornos Virtuales Journal Article
In: Revista de la Asociación Interacción Persona Ordenador (AIPO), vol. 2, no. 2, pp. 54–62, 2021.
Open Tools and Resources
- 3D Tune-In Toolkit: Repository link
- Image Source Method App: Repository link
- Binaural Rendering Toolbox (BRT): Repository link
Legacy and Future Directions
SAVLab has laid the groundwork for advanced psychoacoustic research tools, integrating auditory models with VR/AR environments. The tools developed under SAVLab will continue to serve the scientific community as benchmarks for spatial audio research and immersive audio applications.
