
TASCAR - Toolbox for Acoustic Scene Creation and Rendering

TASCAR (Toolbox for Acoustic Scene Creation and Rendering) is a research toolkit for rendering virtual acoustic environments. TASCAR is developed in collaboration with the University of Oldenburg. It is the central rendering software for the Gesture Lab and the 3D Virtual Reality Lab at UOL. HTCH and UOL are working closely together in the areas of virtual reality reproduction and behavioral interaction.

All information about the project can be found at http://tascar.org/

Installation information can be found here: install.tascar.org

Be sure to check out the interactive examples as well: https://www.youtube.com/channel/UCAXZPzxbOJM9CM0IBfgvoNg

TASCAR is available under the GNU GPL v2.

Literature
  • G. Grimm, J. Luberadzka, and V. Hohmann. A toolbox for rendering virtual acoustic environments in the context of audiology. Acta Acustica united with Acustica, 105(3):566–578, May/June 2019. doi:10.3813/AAA.919337
  • G. Grimm, J. Luberadzka, T. Herzke, and V. Hohmann. Toolbox for acoustic scene creation and rendering (TASCAR): Render methods and research applications. In F. Neumann, editor, Proceedings of the Linux Audio Conference, Mainz, Germany, 2015. Johannes Gutenberg-Universität Mainz.
  • G. Grimm, B. Kollmeier, and V. Hohmann. Spatial acoustic scenarios in multi-channel loudspeaker systems for hearing aid evaluation. Journal of the American Academy of Audiology, 27(7):557–566, 2016.
  • G. Grimm, J. Luberadzka, and V. Hohmann. Virtual acoustic environments for comprehensive evaluation of model-based hearing devices. International Journal of Audiology, 2016.
Technology

Moving sound sources are simulated in a physically correct way; air absorption and the Doppler effect are taken into account. A mirror sound source model with simple reflection parameters supports both static and dynamically moving reflectors. The simulation runs in real time in the time domain, allowing interactive positioning of objects, e.g. controlling the listener position via body motion trackers. The output signal can be played back in Higher-Order Ambisonics (HOA), Vector-Base Amplitude Panning (VBAP) or binaurally via headphones. Generic or custom HRTFs can be used for binaural output.
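The two physical effects mentioned above follow directly from the finite speed of sound: the time-of-flight delay grows with distance, and a time-varying delay produces the Doppler shift. The following sketch illustrates the textbook formulas; it is not TASCAR's implementation, and the constant speed of sound is an assumption.

```python
import math

C = 343.0  # speed of sound in air (m/s); assumed constant for illustration

def doppler_frequency(f_source: float, radial_speed: float) -> float:
    """Observed frequency for a source moving towards a static listener.

    Classic Doppler formula f_obs = f_src * c / (c - v), where v is the
    radial speed of the source towards the listener in m/s (negative
    values mean the source is receding).
    """
    return f_source * C / (C - radial_speed)

def propagation_delay(distance_m: float) -> float:
    """Time-of-flight delay in seconds for a source at the given distance."""
    return distance_m / C

# A 440 Hz source approaching the listener at 20 m/s is heard higher:
f_obs = doppler_frequency(440.0, 20.0)
# A source 343 m away is heard with one second of delay:
delay = propagation_delay(343.0)
```

In a time-domain renderer such as TASCAR, the Doppler shift is not applied as an explicit frequency change; it emerges naturally when the propagation delay is updated continuously as the source moves.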

An interface for synchronization with video sources or interactive computer graphics is available via Open Sound Control (OSC). In principle, it is possible to interface with the Blender Game Engine or with Unreal Engine.
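OSC messages are simple UDP datagrams: a null-padded address string, a type-tag string, and big-endian arguments. The sketch below encodes a minimal float-only OSC message using only the Python standard library; the address path `/scene/src/pos` and the port number are hypothetical placeholders, not documented TASCAR endpoints.

```python
import socket
import struct

def _osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad a byte string to a multiple of 4 bytes (OSC spec)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message carrying only float32 arguments."""
    msg = _osc_pad(address.encode("ascii"))            # address pattern
    msg += _osc_pad(("," + "f" * len(args)).encode("ascii"))  # type tags
    for a in args:
        msg += struct.pack(">f", a)                    # big-endian float32
    return msg

# Send a hypothetical position update (x, y, z in metres) to a renderer
# listening on localhost; path and port are assumptions for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/scene/src/pos", 1.0, 2.0, 0.0), ("127.0.0.1", 9877))
```

In practice, a library such as python-osc or liblo hides this encoding; the point here is only that any OSC-capable tool (game engine, tracker, video player) can drive the scene over plain UDP.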

Copyright 2024.
