I am assisting with the design of an experiment investigating an individual's ability to monitor and detect errors in their own speech. In particular, we are interested in the role the external monitor (i.e., listening to one's own self-generated speech) may play in error detection.
To accomplish this, we are working on a masking protocol to block participants' ability to hear themselves speak and thereby prevent them from using the external channel. Currently, we plan to present either white or pink noise, at an individually determined dB level, through active noise-cancelling headphones to mask the participant's own speech during a naming task.
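In case it is useful to anyone setting up a similar protocol, below is a minimal sketch of how a pink (1/f) noise masker could be generated digitally by spectrally shaping white noise. The sample rate, duration, and dBFS level here are placeholder values, not calibrated presentation levels; the actual SPL at the ear must still be verified per participant at the headphone output.

```python
import numpy as np

def pink_noise(n_samples, fs=44100, rms_db_fs=-20.0, seed=0):
    """Generate a pink-noise masker scaled to a target RMS level in dBFS.

    Pink noise is obtained by filtering Gaussian white noise in the
    frequency domain with a 1/sqrt(f) amplitude envelope, which yields
    the characteristic -3 dB/octave power slope.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # 1/sqrt(f) amplitude scaling; leave the DC bin (f = 0) untouched
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    # normalize to the requested RMS level (digital full scale, not SPL)
    target_rms = 10 ** (rms_db_fs / 20.0)
    pink *= target_rms / np.sqrt(np.mean(pink ** 2))
    return pink

masker = pink_noise(44100)  # one second of pink noise at 44.1 kHz
```

This only controls the digital signal level; mapping dBFS to the sound pressure level each participant actually hears requires acoustic calibration of the specific headphones.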
One concern we have is whether this procedure will address the information a speaker receives via bone conduction. In particular, we have two questions:
1. Assuming the air channel is successfully masked, how much acoustic information can an individual gather about a lexical item from bone conduction? In short, we're trying to figure out how much phonological information can be recovered from the bone-conducted signal alone.
2. On the topic of practical implementation: in masking air-conducted speech, have we effectively masked bone-conducted speech as well, or does blocking information from bone conduction require a separate approach?
Any information or recommended references would be greatly appreciated!
Language & Learning Lab
Moss Rehabilitation Research Institute