3D Tune-In: Enhancing Hearing Aid Use with 3D Audio Technology
Partners: Imperial College London (Coordinator), De Montfort University, University of Málaga, University of Nottingham, XTeam Software Solutions, Vianet.
Programme: Horizon 2020
Call: ICT-21-2014 – Advanced digital gaming/gamification technologies
Grant Agreement No.: 644051
Duration: May 2015 – April 2018
Principal Investigator at UMA: Arcadio Reyes-Lecuona
Research team at UMA: Luis Molina-Tanco, María Cuevas-Rodríguez, Daniel González-Toledo
Project Summary
The 3D Tune-In project aimed to enhance the usability and customization of hearing aids by integrating gamification and 3D audio technologies. It developed the 3D Tune-In Toolkit, an open-source library for real-time spatial audio rendering and hearing loss emulation, enabling advanced simulations for research, education, and interactive applications. The project provided innovative tools to empower individuals with hearing impairments, raise awareness among the general public, and support audiologists and manufacturers in improving hearing aid fitting and demonstrations.
Objectives
- Improve the usability and customization of hearing aids for individuals with hearing impairments through innovative tools that leverage spatial audio technologies and binaural rendering.
- Raise awareness about the challenges of hearing loss among non-affected users.
- Provide advanced tools for hearing aid manufacturers and audiologists to enhance fitting and demonstrations through realistic 3D soundscapes and binaural audio processing.
Role of the DIANA Research Group
The DIANA research group at the University of Málaga contributed significantly to the project by:
- Leading the technical development of the 3D Tune-In Toolkit, an open-source library for:
  - Real-time spatial audio rendering to create immersive 3D soundscapes.
  - Hearing loss simulation for realistic auditory experiences.
  - Interactive applications for AR/VR environments and psychoacoustics research.
The toolkit has become a reference for auditory simulations, fostering its use in psychoacoustics, audiology, and virtual environments. A minimal sketch of the binaural rendering principle at its core is given below.
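To make the spatial audio rendering item above concrete, the following sketch shows the core operation behind binaural rendering: convolving a mono source signal with a pair of Head Related Impulse Responses (HRIRs), one per ear. It is a deliberately simplified illustration in plain C++ and does not reproduce the 3D Tune-In Toolkit API; the names (StereoBlock, spatialise) and the toy HRIR values are invented for the example, and a real-time renderer would use partitioned FFT convolution and HRIR interpolation instead of the direct-form loop shown here.

// Minimal sketch of HRIR-based binaural rendering (illustrative only; the
// real 3D Tune-In Toolkit API is different). A mono block is convolved with
// a left-ear and a right-ear Head Related Impulse Response (HRIR) to
// produce a stereo frame for headphone playback.
#include <cstddef>
#include <vector>

struct StereoBlock {
    std::vector<float> left;
    std::vector<float> right;
};

// Direct-form FIR convolution of one audio block with an impulse response.
// A real-time renderer would use partitioned FFT convolution instead.
static std::vector<float> convolve(const std::vector<float>& block,
                                   const std::vector<float>& ir)
{
    std::vector<float> out(block.size() + ir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < block.size(); ++n)
        for (std::size_t k = 0; k < ir.size(); ++k)
            out[n + k] += block[n] * ir[k];
    return out;
}

// Spatialise one mono block with the HRIR pair measured closest to the
// source direction (interpolation between neighbouring HRIRs is omitted).
StereoBlock spatialise(const std::vector<float>& monoBlock,
                       const std::vector<float>& hrirLeft,
                       const std::vector<float>& hrirRight)
{
    return { convolve(monoBlock, hrirLeft), convolve(monoBlock, hrirRight) };
}

int main()
{
    std::vector<float> src(128, 0.0f);
    src[0] = 1.0f;                                  // unit impulse as test input
    std::vector<float> hrirL = {0.9f, 0.3f, 0.1f};  // toy HRIRs, not measured data
    std::vector<float> hrirR = {0.4f, 0.2f, 0.05f};
    StereoBlock out = spatialise(src, hrirL, hrirR);
    return out.left.size() == out.right.size() ? 0 : 1;
}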
Publications by UMA
Reyes-Lecuona A; Molina-Tanco L; Cuevas-Rodríguez M; González-Toledo D
Interacción 3D y Realidad Virtual en la Universidad de Málaga. Presentación del grupo 3DI-DIANA Journal Article
In: Interaccion – Revista Digital de AIPO, pp. 85–88, 2020.
@article{Reyes-Lecuona2020,
title = {Interacción 3D y Realidad Virtual en la Universidad de Málaga. Presentación del grupo 3DI-DIANA},
author = {Arcadio Reyes-Lecuona and Luis Molina-Tanco and María Cuevas-Rodríguez and Daniel González-Toledo},
year = {2020},
date = {2020-01-01},
journal = {Interaccion - Revista Digital de AIPO},
pages = {85–88},
abstract = {The 3DI-DIANA team is researching and developing technology in 3D interaction and user experience in Interactive Virtual Environments (IVE) since 2004. Their work has focused on 3D interaction techniques for Virtual Reality from a Human-Computer Interaction perspective. Their expertise and interests span binaural 3D audio spatialisation, 3D interaction with reduced degrees of freedom, and Virtual and Augmented Reality, including haptic Virtual Reality and Presence. The team have an important record in collaborative projects, both at national and European level. They are open to collaborations with other research teams, being able to contribute with their expertise in their described topics. More specifically, the 3DI-DIANA team can contribute in extending interactive virtual environments with 3D Audio and developing 3D interaction techniques which improve the user experience of handling and visualization of complex 3D objects.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Reyes-Lecuona A; Cuevas-Rodríguez M; González-Toledo D; del Olmo C G; Rubia-Cuestas E J; Molina-Tanco L; Rodríguez-Rivero Á; Picinali L
The 3D Tune-In Toolkit: A C++ library for binaural spatialisation, and hearing loss / hearing aids emulation Conference
23rd International Congress on Acoustics, ICA2019, Aachen, 2019, ISBN: 9783939296157.
@conference{The/2019,
title = {The 3D Tune-In Toolkit: A C++ library for binaural spatialisation, and hearing loss / hearing aids emulation},
author = {Arcadio Reyes-Lecuona and María Cuevas-Rodríguez and Daniel González-Toledo and Carlos Garre del Olmo and Ernesto J. Rubia-Cuestas and Luis Molina-Tanco and Ángel Rodríguez-Rivero and Lorenzo Picinali},
url = {http://pub.dega-akustik.de/ICA2019/data/index.html},
isbn = {9783939296157},
year = {2019},
date = {2019-09-01},
booktitle = {23rd International Congress on Acoustics, ICA2019},
address = {Aachen},
abstract = {This contribution presents the 3D Tune-In (3DTI) Toolkit, an open source C++ library for binaural spatialisation which includes hearing loss and hearing aid emulators. Binaural spatialisation is performed through convolution with user-imported Head Related Transfer Functions (HRTFs) and Binaural Room Impulse Responses (BRIRs), including additional functionalities such as near- and far-field source simulation, customisation of Interaural Time Differences (ITDs), and Ambisonic-based binaural reverberation. Hearing loss is simulated through gammatone filters and multiband expanders/compressors, including advanced non-linear features such as frequency smearing and temporal distortion. A generalised hearing aid simulator is also included, with functionalities such as dynamic equalisation, calibration from user-inputted audiograms, and directional processing.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
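The abstract above describes hearing loss simulation based on gammatone filters and multiband expanders calibrated from the listener's audiogram. As a hedged, single-band illustration of the expander idea only (this is not the toolkit's implementation; the class name DownwardExpander and all parameter choices are invented for the example, and the per-band gammatone split is omitted), the sketch below attenuates the signal progressively once its envelope falls below an audiogram-derived threshold:

// Hedged, single-band sketch of a downward expander for hearing loss
// simulation: signals whose envelope is below a threshold (derived from an
// audiogram) are attenuated progressively, while louder signals pass almost
// unchanged. The class name and parameter values are invented for this
// example and do not come from the 3DTI Toolkit.
#include <cmath>
#include <vector>

class DownwardExpander {
public:
    DownwardExpander(float thresholdDb, float ratio, float envCoeff)
        : thresholdDb_(thresholdDb), ratio_(ratio), alpha_(envCoeff) {}

    // Process one block of samples in place.
    void process(std::vector<float>& block)
    {
        for (float& x : block) {
            // One-pole envelope follower on the rectified sample.
            env_ = alpha_ * env_ + (1.0f - alpha_) * std::fabs(x);
            float levelDb = 20.0f * std::log10(env_ + 1e-9f);
            float gainDb = 0.0f;
            if (levelDb < thresholdDb_) {
                // Below threshold: the quieter the signal, the stronger the
                // attenuation (ratio_ > 1 gives downward expansion).
                gainDb = (levelDb - thresholdDb_) * (ratio_ - 1.0f);
            }
            x *= std::pow(10.0f, gainDb / 20.0f);
        }
    }

private:
    float thresholdDb_;
    float ratio_;
    float alpha_;
    float env_ = 0.0f;
};

In a multiband arrangement, the input would first be split into auditory bands and one such expander run per band, with each threshold taken from the corresponding frequency of the listener's audiogram.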
Picinali L; Cuevas-Rodríguez M; González-Toledo D; Reyes-Lecuona A
Speech-in-noise performances in virtual cocktail party using different non-individual Head Related Transfer Functions Conference
23rd International Congress on Acoustics, ICA2019, 2019, ISBN: 9783939296157.
@conference{Spe/2019,
title = {Speech-in-noise performances in virtual cocktail party using different non-individual Head Related Transfer Functions},
author = {Lorenzo Picinali and María Cuevas-Rodríguez and Daniel González-Toledo and Arcadio Reyes-Lecuona},
url = {http://ica2019.org/technical-program/},
isbn = {9783939296157},
year = {2019},
date = {2019-09-01},
booktitle = {23rd International Congress on Acoustics, ICA2019},
pages = {2158–2159},
abstract = {Functions (HRTFs) can have an impact on localisation accuracy and, more in general, realism and sound sources externalisation. The impact of the HRTF choice on speech-in-noise performances in cocktail party scenarios has though not yet been investigated in depth. Within a binaurally-rendered virtual environment, Speech Reception Thresholds (SRTs) with frontal target speaker and lateral noise maskers were measured for 22 subjects several times across different sessions, and using different HRTFs. Results show that for the majority of the tested subjects, significant differences could be found between the SRTs measured using different HRTFs. Furthermore, the HRTFs leading to better or worse SRT performances were not the same across the subjects, indicating that the choice of the HRTF can indeed have an impact on speech-in-noise performances within the tested conditions.
These results suggest that when testing speech-in-noise performances within binaurally-rendered virtual environments, the choice of the HRTF should be carefully considered. Furthermore a recommendation should be made for future modelling of the speech-in-noise perception mechanisms to include monoaural spectral cues in addition to interaural differences.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
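The Speech Reception Thresholds (SRTs) discussed in the abstract above are commonly estimated with an adaptive tracking procedure that lowers the signal-to-noise ratio after correct responses and raises it after errors. The sketch below shows a generic 1-up/1-down staircase of this kind; it is an illustration of the general method, not necessarily the exact procedure, step size or scoring rule used in the study, and the class name SrtStaircase and all values are invented for the example.

// Generic adaptive 1-up/1-down staircase for estimating a Speech Reception
// Threshold (SRT): the signal-to-noise ratio (SNR) is lowered after each
// correct response and raised after each error, converging on the SNR at
// which the listener scores about 50% correct. Hedged illustration only.
#include <numeric>
#include <vector>

class SrtStaircase {
public:
    SrtStaircase(float startSnrDb, float stepDb)
        : snrDb_(startSnrDb), stepDb_(stepDb) {}

    float currentSnrDb() const { return snrDb_; }

    // Report whether the listener repeated the sentence correctly at the
    // current SNR; the staircase then updates the SNR for the next trial.
    void reportTrial(bool correct)
    {
        if (trials_ > 0 && correct != lastCorrect_)
            reversalSnrs_.push_back(snrDb_);   // direction change = reversal
        snrDb_ += correct ? -stepDb_ : stepDb_;
        lastCorrect_ = correct;
        ++trials_;
    }

    // Estimate the SRT as the mean SNR at the recorded reversals.
    float estimateSrtDb() const
    {
        if (reversalSnrs_.empty()) return snrDb_;
        return std::accumulate(reversalSnrs_.begin(), reversalSnrs_.end(), 0.0f)
               / static_cast<float>(reversalSnrs_.size());
    }

private:
    float snrDb_;
    float stepDb_;
    bool lastCorrect_ = false;
    int trials_ = 0;
    std::vector<float> reversalSnrs_;
};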
Picinali L; Hrafnkelsson R; Reyes-Lecuona A
The 3D Tune-In Toolkit VST Binaural Audio Plugin Conference
Audio Engineering Society Conference: 2019 AES International Conference on Immersive and Interactive Audio, Audio Engineering Society 2019.
@conference{picinali20193d,
title = {The 3D Tune-In Toolkit VST Binaural Audio Plugin},
author = {Lorenzo Picinali and Ragnar Hrafnkelsson and Arcadio Reyes-Lecuona},
year = {2019},
date = {2019-01-01},
booktitle = {Audio Engineering Society Conference: 2019 AES International Conference on Immersive and Interactive Audio},
organization = {Audio Engineering Society},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Cuevas-Rodríguez M; Picinali L; González-Toledo D; del Olmo C G; Rubia-Cuestas E J; Molina-Tanco L; Reyes-Lecuona A
3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation Journal Article
In: PLOS ONE, vol. 14, no. 3, pp. 1–37, 2019.
@article{10.1371/journal.pone.0211899,
title = {3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation},
author = {María Cuevas-Rodríguez and Lorenzo Picinali and Daniel González-Toledo and Carlos Garre del Olmo and Ernesto J. Rubia-Cuestas and Luis Molina-Tanco and Arcadio Reyes-Lecuona},
url = {https://doi.org/10.1371/journal.pone.0211899},
doi = {10.1371/journal.pone.0211899},
year = {2019},
date = {2019-01-01},
journal = {PLOS ONE},
volume = {14},
number = {3},
pages = {1–37},
publisher = {Public Library of Science},
abstract = {The 3D Tune-In Toolkit (3DTI Toolkit) is an open-source standard C++ library which includes a binaural spatialiser. This paper presents the technical details of this renderer, outlining its architecture and describing the processes implemented in each of its components. In order to put this description into context, the basic concepts behind binaural spatialisation are reviewed through a chronology of research milestones in the field in the last 40 years. The 3DTI Toolkit renders the anechoic signal path by convolving sound sources with Head Related Impulse Responses (HRIRs), obtained by interpolating those extracted from a set that can be loaded from any file in a standard audio format. Interaural time differences are managed separately, in order to be able to customise the rendering according the head size of the listener, and to reduce comb-filtering when interpolating between different HRIRs. In addition, geometrical and frequency-dependent corrections for simulating near-field sources are included. Reverberation is computed separately using a virtual loudspeakers Ambisonic approach and convolution with Binaural Room Impulse Responses (BRIRs). In all these processes, special care has been put in avoiding audible artefacts produced by changes in gains and audio filters due to the movements of sources and of the listener. The 3DTI Toolkit performance, as well as some other relevant metrics such as non-linear distortion, are assessed and presented, followed by a comparison between the features offered by the 3DTI Toolkit and those found in other currently available open- and closed-source binaural renderers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
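One detail worth unpacking from the abstract above is why Interaural Time Differences (ITDs) are handled separately from the HRIR convolution: doing so lets the renderer scale the ITD to the listener's head size and reduces comb-filtering when interpolating between HRIRs. The sketch below illustrates the general idea with a spherical-head (Woodworth-style) formula; it is not the formula or code used inside the 3DTI Toolkit, and the function names are invented for the example.

// Hedged sketch of head-size-dependent ITD computation with a spherical-head
// (Woodworth-style) model. Not the formula or code used inside the 3DTI
// Toolkit; shown only to illustrate why keeping the ITD separate from the
// HRIR convolution makes head-size customisation straightforward.
#include <cmath>
#include <cstddef>

constexpr double kPi = 3.14159265358979323846;

// ITD in seconds for a source at azimuth `azimuthRad` (0 = front, +pi/2 =
// directly to one side), a head of radius `headRadiusM` metres and a speed
// of sound `c` in m/s.
double woodworthItdSeconds(double azimuthRad, double headRadiusM, double c = 343.0)
{
    double theta = std::fabs(azimuthRad);
    if (theta > kPi / 2.0) theta = kPi - theta;  // spherical head: fold rear sources
    return (headRadiusM / c) * (theta + std::sin(theta));
}

// Convert the ITD into a whole-sample delay for the far (contralateral) ear.
std::size_t itdToSampleDelay(double itdSeconds, double sampleRateHz)
{
    return static_cast<std::size_t>(std::lround(itdSeconds * sampleRateHz));
}

A renderer following this approach would delay the contralateral ear by this amount and convolve with HRIRs whose time-of-arrival difference has been removed, which is also what reduces comb-filtering when interpolating between neighbouring HRIRs, as noted in the abstract.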
Cuevas-Rodríguez M; Picinali L; González-Toledo D; del Olmo C G; Rubia-Cuestas E J; Molina-Tanco L; Poirier-Quinot D; Reyes-Lecuona A
The 3D Tune-In Toolkit – 3D audio spatialiser, hearing loss and hearing aid simulations Conference
IEEE VR Workshop on Sonic Interactions for Virtual Environments (SIVE2018), 2018.
@conference{Mar/The/2018,
title = {The 3D Tune-In Toolkit - 3D audio spatialiser, hearing loss and hearing aid simulations},
author = {María Cuevas-Rodríguez and Lorenzo Picinali and Daniel González-Toledo and Carlos Garre del Olmo and Ernesto J. Rubia-Cuestas and Luis Molina-Tanco and David Poirier-Quinot and Arcadio Reyes-Lecuona},
url = {https://sive.create.aau.dk/},
year = {2018},
date = {2018-01-01},
booktitle = {IEEE VR Workshop on Sonic Interactions for Virtual Environments (SIVE2018)},
abstract = {The 3D Tune-In (3DTI) project (http://www.3d-tune-in.eu) aims at using 3D sound, visuals and games to support people using hearing aid devices and educate others about hearing loss. Within the project the 3DTI Toolkit, a standard C++ library for audio spatialisation and simulation using speakers or headphones, has been developed. The toolkit allows development of highly realistic and immersive 3D audio simulations (both loudspeaker- and headphones-based), and simulation (within the virtual environment) of hearing aid devices and of different typologies of hearing loss. The 3DTI Toolkit can be, and has been, used to develop applications and games relating to hearing aid and hearing loss.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Cuevas-Rodríguez M; González-Toledo D; Rubia-Cuestas E J; del Olmo C G; Molina-Tanco L; Reyes-Lecuona A; Poirier-Quinot D; Picinali L
An Open-Source Audio Renderer for 3D Audio with Hearing Loss and Hearing Aid Simulations Conference
Proceedings of AES Conference 2017, AES 2017.
@conference{AES2017,
title = {An Open-Source Audio Renderer for 3D Audio with Hearing Loss and Hearing Aid Simulations},
author = {María Cuevas-Rodríguez and Daniel González-Toledo and Ernesto J. Rubia-Cuestas and Carlos Garre del Olmo and Luis Molina-Tanco and Arcadio Reyes-Lecuona and David Poirier-Quinot and Lorenzo Picinali},
url = {http://www.aes.org/e-lib/browse.cfm?elib=18650},
year = {2017},
date = {2017-01-01},
urldate = {2017-01-01},
booktitle = {Proceedings of AES Conference 2017},
organization = {AES},
abstract = {The EU-funded 3D Tune-In (http://www.3d-tune-in.eu/) project introduces an innovative approach using 3D sound, visuals, and gamification techniques to support people using hearing aid devices. In order to achieve a high level of realism and immersiveness within the 3D audio simulations, and to allow for the emulation (within the virtual environment) of hearing aid devices and of different typologies of hearing loss, a custom open-source C++ library (the 3D Tune-In Toolkit) has been developed. The 3DTI Toolkit integrates several novel functionalities for speaker and headphone-based sound spatialization, together with generalized hearing aid and hearing loss simulators. A first version of the 3DTI Toolkit will be released with a non-commercial open-source license in Spring 2017.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Resources