TWO!EARS
Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a wide variety of ICT systems and applications. Human beings do not react simply according to what they perceive; rather, they react on the grounds of what the percepts mean to them in their current action-specific, emotional and cognitive situation. Thus, while many models have been proposed that mimic the signal processing involved in human visual and auditory perception, these models cannot predict the experience and reactions of human users. The model we aim to develop in the Two!Ears project will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result should be
A computational framework for modelling active exploratory listening that assigns meaning to auditory scenes.
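To make the coupling of signal-driven and hypothesis-driven processing more concrete, the following is a minimal illustrative sketch, not the actual Two!Ears implementation: all class names, the cue extraction, and the update rule are assumptions introduced purely to show how a bottom-up stage feeding cues into a top-down belief-update stage could be organised in code.

```python
# Illustrative sketch only: hypothetical names, not the Two!Ears framework.
# Bottom-up: extract cues from the ear signals.
# Top-down: update hypotheses about the scene and (in a full model) steer
# attention or exploratory actions such as head movements.

from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    label: str          # e.g. "speech source on the left"
    confidence: float   # current belief in this interpretation


@dataclass
class AuditoryModel:
    hypotheses: list[Hypothesis] = field(default_factory=list)

    def bottom_up(self, signal_frame):
        """Signal-driven stage: derive cues from one frame of ear signals."""
        # Placeholder cue; a real model would compute binaural features
        # such as interaural time and level differences.
        return {"energy": sum(x * x for x in signal_frame)}

    def top_down(self, cues):
        """Hypothesis-driven stage: revise beliefs in the light of new cues."""
        for h in self.hypotheses:
            # Toy update rule: more energetic frames reinforce each hypothesis.
            h.confidence = min(1.0, h.confidence + 0.01 * cues["energy"])

    def step(self, signal_frame):
        cues = self.bottom_up(signal_frame)   # bottom-up pass
        self.top_down(cues)                   # top-down feedback
        # Return the currently most plausible interpretation of the scene.
        return max(self.hypotheses, key=lambda h: h.confidence, default=None)
```

In such a loop the top-down stage would, in a full system, also request new evidence, for example by directing attention or triggering exploratory movements, which is what distinguishes active exploratory listening from purely feed-forward signal processing.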