Audio editing (Lorenzo Picinali)


Subject: Audio editing
From:    Lorenzo Picinali  <LPicinali@xxxxxxxx>
Date:    Wed, 19 Dec 2012 13:53:21 +0000
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear Abi,

In terms of normalization you need to be rather careful: if you normalize every sample (every vowel/consonant sound) individually, you will likely increase the amplitude of each one by a different amount, and you will then have problems when calibrating their levels. So, if you have all the recordings in one single file, you can just normalize it all together to 0 dBFS, and that should still give a general amplitude increase. If not, you can analyse all the recordings, find the one with the maximum amplitude peak, and raise the level of all the files by the same amount until that peak reaches 0 dBFS... but this is not always needed. You can do this with Adobe Audition, but I've found that using SoundHack (Tom Erbe's software... you can get it for free from the Internet) is very fast, precise and simple.

Regarding compression, and the amplitude of the signals in general, you need to establish the level at which you want to perform the calibration: speech recordings used for vocal audiometry generally come with a sinusoidal signal whose level matches the RMS of the various samples, so you can use that for calibrating the playback system. This signal can have various levels, depending on the set you are using... the sets I have used are generally calibrated between -10 and -14 dBFS unweighted. So, you'll need to decide at which level to calibrate yours, and from that you'll see whether you need compression or limiting at all.

In order to avoid clipping, a limiter with a 10 ms attack time and a threshold of -0.5 dBFS would do... obviously, if you increase the level of the signal too much before the actual limiter, you'll avoid clipping, but you'll have other types of problems/artifacts due to the dynamic processing.

I hope this helps!

Yours,
Lorenzo

--
Dr. Lorenzo Picinali
Senior Lecturer in Music/Audio Technology
Faculty of Technology
De Montfort University, The Gateway
Leicester, LE1 9BH
lpicinali@xxxxxxxx
Tel 0116 207 8051


Date: Tue, 18 Dec 2012 12:05:16 +1300
From: Abin Kuruvilla Mathew <amat527@xxxxxxxx>
Subject: Audio editing

Dear All,

I have a set of audio files (consonants and vowels) to be edited in Adobe Audition, and was wondering to what extent and how much normalization (RMS) and dynamic compression (if necessary) would be needed so that naturalness is preserved and clipping doesn't occur.

Kind regards,
Abin

--
Abin K. Mathew
Doctoral student
Department of Psychology (Speech Science)
Tamaki Campus, 261 Morrin Road, Glen Innes
The University of Auckland
Private Bag 92019
Auckland 1142
New Zealand
Email: amat527@xxxxxxxx
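[Archive note] The common-gain normalization Lorenzo describes (find the largest peak across the whole set of files, then raise every file by the same gain until that peak reaches 0 dBFS) can be sketched in Python. This is an illustrative sketch, not the actual SoundHack or Audition processing; the arrays below stand in for recordings loaded from disk, and all names are made up:

```python
import numpy as np

def common_gain_normalize(signals, target_dbfs=0.0):
    """Scale every signal by ONE common gain so that the largest peak
    across the whole set reaches target_dbfs (0 dBFS = full scale 1.0).
    Because the gain is shared, the level differences between the
    recordings -- needed for calibration -- are preserved."""
    global_peak = max(np.max(np.abs(s)) for s in signals)
    target_linear = 10 ** (target_dbfs / 20.0)
    gain = target_linear / global_peak
    return [s * gain for s in signals], gain

# Stand-ins for vowel/consonant recordings (floats in [-1, 1]):
vowel = 0.25 * np.sin(2 * np.pi * 220 * np.linspace(0, 0.1, 4410))
consonant = 0.5 * np.random.default_rng(0).uniform(-1, 1, 4410)

normalized, gain = common_gain_normalize([vowel, consonant])
# The loudest recording now peaks at full scale; the level ratio
# between the two recordings is unchanged.
```

Normalizing each file individually would instead push every peak to full scale, destroying exactly the relative levels the calibration depends on.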
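[Archive note] To decide where to place the calibration tone (Lorenzo mentions sets calibrated between -10 and -14 dBFS unweighted), it helps to measure RMS level in dBFS. A minimal sketch; note that a full-scale sine sits at about -3.01 dBFS RMS, which is why a tone meant for -14 dBFS RMS needs an amplitude well below full scale:

```python
import numpy as np

def rms_dbfs(signal):
    """RMS level of a signal in dB relative to full scale (1.0)."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20.0 * np.log10(rms)

fs = 48000
t = np.linspace(0, 1, fs, endpoint=False)

# A full-scale sine: RMS = 1/sqrt(2), i.e. about -3.01 dBFS.
full_scale_sine = np.sin(2 * np.pi * 1000 * t)

# A calibration tone intended to sit at -14 dBFS RMS:
# amplitude = sqrt(2) * 10^(-14/20), still safely below clipping.
cal_tone = np.sqrt(2) * 10 ** (-14 / 20.0) * np.sin(2 * np.pi * 1000 * t)
```

Measuring `rms_dbfs(cal_tone)` should return about -14; the same function applied to the speech samples shows whether their RMS matches the tone, which is the property the audiometry sets rely on.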
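[Archive note] The limiter setting Lorenzo suggests (10 ms attack, -0.5 dBFS threshold) can be illustrated with a minimal feed-forward peak limiter. This is a sketch of the general technique, not Audition's actual limiter: real limiters add look-ahead and a release stage; here a hard safety clip catches the attack overshoot that look-ahead would otherwise prevent, and all names are made up:

```python
import numpy as np

def simple_limiter(x, fs, threshold_dbfs=-0.5, attack_ms=10.0):
    """Minimal feed-forward peak limiter: a peak-envelope detector,
    a gain smoothed with a one-pole attack filter, and a hard safety
    clip at the threshold. No release stage is modelled."""
    threshold = 10 ** (threshold_dbfs / 20.0)        # -0.5 dBFS ~ 0.944 linear
    attack = np.exp(-1.0 / (attack_ms * 1e-3 * fs))  # one-pole smoothing coeff
    envelope, gain = 0.0, 1.0
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        envelope = max(abs(sample), attack * envelope)   # peak detector
        target = 1.0 if envelope <= threshold else threshold / envelope
        if target < gain:
            # attack: slide toward the reduced gain over ~10 ms
            gain = attack * gain + (1.0 - attack) * target
        else:
            gain = target
        out[n] = float(np.clip(sample * gain, -threshold, threshold))
    return out

fs = 44100
t = np.arange(fs) / fs
hot = 1.2 * np.sin(2 * np.pi * 440 * t)   # would clip without limiting
limited = simple_limiter(hot, fs)
```

A signal that never crosses the threshold passes through unchanged, which matches Lorenzo's warning: the limiter only becomes audible (and introduces dynamic-processing artifacts) when the signal is driven too hard into it beforehand.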


This message came from the mail archive
/var/www/postings/2012/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University