Prof. Michał Wierzchoń gave a talk entitled "Listening to objects and shapes. On developing sensory substitution with the Colorophone device" – Human Sciences Colloquia

Abstract: Sensory substitution (SS) occurs when information from one sensory modality (e.g. vision) is translated into another (e.g. audition) by means of sensory substitution devices (SSDs). SSDs are usually designed to aid visually impaired people. Interestingly, they may also be used in experiments with sighted participants to investigate the neural or behavioural consequences of SS. SSDs can serve multiple purposes, e.g. recognising objects and their shapes, representing colours, or navigating in space. Here, I present the results of a longitudinal, three-month study with sighted, blindfolded participants. We applied a new visual-to-auditory SSD developed at NTNU: the Colorophone. SSD training was carried out in a newly developed VR-based auditory environment, and training progress was assessed under laboratory conditions with staircased object-detection and orientation-identification tasks. We show that the device, although designed to substitute colour information, also enables the recognition of other simple characteristics of visual objects, such as their shape or orientation; participants can thus learn simple visual object characteristics with an SSD. We discuss the phenomenal characteristics of Colorophone-induced SS, as well as the role of enactive interaction with the environment in the formation of the SS experience. I will also present initial results of this training measured with task-based fMRI during a relevant auditory detection task.
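
The abstract does not describe the Colorophone's actual colour-to-sound mapping, but the general visual-to-auditory substitution idea can be illustrated with a minimal sketch. The Python example below assumes a deliberately simple hypothetical scheme in which each RGB channel of a pixel drives the amplitude of one fixed sine tone; the function name, tone frequencies, and parameters are illustrative only, not the device's real design.

    import numpy as np

    SAMPLE_RATE = 44100  # audio sample rate in Hz
    # Hypothetical tone frequencies, one per R, G, B channel
    # (the real Colorophone mapping is not specified in the abstract).
    CHANNEL_FREQS = (440.0, 880.0, 1760.0)

    def sonify_rgb(rgb, duration=0.5):
        """Map an (R, G, B) triple in [0, 255] to a mono audio buffer.

        Each colour channel scales the amplitude of one fixed sine tone,
        so the mixture of tones encodes the mixture of colour components.
        """
        t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration),
                        endpoint=False)
        signal = np.zeros_like(t)
        for value, freq in zip(rgb, CHANNEL_FREQS):
            signal += (value / 255.0) * np.sin(2 * np.pi * freq * t)
        # Normalise to avoid clipping when several channels are saturated.
        peak = np.max(np.abs(signal))
        return signal / peak if peak > 0 else signal

    # Example: a saturated red pixel produces a pure 440 Hz tone.
    buffer = sonify_rgb((255, 0, 0))

Under this toy scheme, the resulting buffer could be written to a WAV file or played back with any audio library; a real SSD would apply such a mapping continuously to the camera stream so the user hears the scene change as they move.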