Subject: IBM's Superhuman Speech
From: Yadong Wang <ydwang(at)ele.uri.edu>
Date: Mon, 4 Nov 2002 21:05:16 -0500

You just don't understand!

IBM's Superhuman Speech initiative clears conversational confusion.

by Sam Howard-Spink

It's a vision of the future that's been promised for decades — humans and machines interacting with each other by voice. Yet despite over 50 years of research, today's speech recognition systems are not nearly accurate enough for widespread use. They work when customized to recognize a trained individual's voice, such as that of a secretary accustomed to dictation, or limited data sets such as telephone numbers or medical vocabulary. A real-time, natural-language recognizer that can cope with all types of speech remains an elusive goal.

IBM competes fiercely to improve automatic speech recognition (ASR). The company has focused on speech technologies for more than 30 years and is currently embarking on an eight-year mission to develop a Superhuman Speech system (SHS) — a recognizer that actually performs better than humans.

The SHS initiative is an umbrella research effort encompassing all of IBM's speech recognition projects and product tracks. There are roughly 100 IBM speech researchers worldwide. Teams work on such problems as machine comprehension or "natural-language understanding," embedded ASR for cars and mobile devices, and transcription tools for specific industries. All of these projects require increasingly accurate recognition capabilities, and SHS is leading the effort on their behalf.

David Nahamoo, department group manager of human language technologies at IBM Research, joined the company's speech recognition team from Purdue University in 1982. "My professors at the time told me that I should forget all about speech recognition, since I wouldn't get anywhere with it in my lifetime," he says. "But today we have some very good recognition technologies. With the Superhuman project we'll be dealing with a lot of issues we haven't handled in the past. These include accents, high-noise environments, all kinds of variability in the delivery channel, the mood of the speaker, the spontaneity of the speech, and other variables. Depending on the task, we're still a factor of three to a factor of 10 behind human performance. SHS aims to close that gap."

Better than human

Following early pioneering work at MIT, Carnegie Mellon, and other universities, speech research began in earnest at IBM in 1970. The breakthrough came when researchers applied the principles of statistical pattern recognition to the problem of ASR. They constructed a system that could learn the likelihood of word sequences simply by crunching vast quantities of data. Once researchers figured out how to convert linguistics into mathematics, speech recognition moved from theory to practice.
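To make the statistical idea concrete, here is a minimal sketch of the kind of model the article alludes to: a toy bigram language model, in Python, that learns word-sequence likelihoods by counting pairs of words in a corpus. The corpus, the bigram order, and the add-one smoothing are illustrative choices, not details of IBM's system, which paired far larger language models with acoustic models:

from collections import defaultdict

# Toy bigram language model: learn the likelihood of word sequences by
# counting word pairs in a corpus. Illustration only, not IBM's system.

def train_bigram(sentences):
    """Count unigrams and bigrams over tokenized sentences."""
    unigrams = defaultdict(int)
    bigrams = defaultdict(int)
    for words in sentences:
        tokens = ["<s>"] + words + ["</s>"]  # sentence boundary markers
        for w in tokens:
            unigrams[w] += 1
        for prev, cur in zip(tokens, tokens[1:]):
            bigrams[(prev, cur)] += 1
    return unigrams, bigrams

def sequence_probability(words, unigrams, bigrams, vocab_size, alpha=1.0):
    """P(sentence) under the bigram model, with add-alpha smoothing so
    unseen word pairs still receive a small nonzero probability."""
    tokens = ["<s>"] + words + ["</s>"]
    prob = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        prob *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return prob

# A tiny invented corpus; a plausible word order scores higher than an
# implausible one, which is exactly what "learning the likelihood of
# word sequences" buys a recognizer.
corpus = [["call", "the", "office"], ["call", "the", "doctor"], ["book", "a", "flight"]]
uni, bi = train_bigram(corpus)
print(sequence_probability(["call", "the", "doctor"], uni, bi, len(uni)))
print(sequence_probability(["doctor", "the", "call"], uni, bi, len(uni)))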
Speech research at IBM has come full circle in the past 30 years, says Nahamoo. Today there are four speech recognition product tracks at IBM: dictation for professionals and consumers (ViaVoice™); embedded recognition for devices such as PDAs and automotive applications; telephony (WebSphere™ Voice Server); and transcription tools for business and professional use in medical, legal, and other fields (WebSphere Transcription Server). In 2000 IBM began providing voice services as part of its e-business infrastructure offerings.

Rather than introducing new products, the SHS initiative will improve upon IBM's existing speech products by substantially reducing error rates. The project aims to develop technology that will meet two objectives. First, researchers hope that SHS will eliminate almost all need for customization, so that a speech-recognition package can be used by anyone in any circumstances. The other major goal is to get the systems to perform as well as or better than humans. At that point, the economic benefits of the technology are expected to dictate wider deployment and drive a market that could be worth $30 billion to $50 billion a year.

"We have reached a point where we've closed the loop from innovation to product delivery," says Nahamoo. "Now we're reexamining all the necessary components to take this technology to the next level. We're really concentrating on matching and even exceeding human performance. That is what Superhuman Speech is about."

Hearing above the din

The proposal to develop a speech recognizer that is "better than human" seems far-fetched at first, until one remembers that humans also make errors in recognizing speech. Current technologies have error rates of around one in 20 words — nearly 10 times worse than human performance. The SHS project is not seeking to perfect speech recognition, but rather to refine it to easily tolerable levels.
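For concreteness, one error in 20 words is a 5% word error rate (WER), which implies a human benchmark of roughly 0.5% on comparable material. WER is the standard accuracy metric in speech recognition: the minimum number of word substitutions, insertions, and deletions needed to turn a system's output into the reference transcript, divided by the number of reference words. A minimal sketch follows; the example sentences are invented:

def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a standard dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i          # deleting i reference words
    for j in range(len(hyp) + 1):
        dist[0][j] = j          # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # match or substitution
    return dist[len(ref)][len(hyp)] / len(ref)

ref = "please transcribe this recording of the interview"
hyp = "please transcribe his recording of interview"
print(f"WER: {word_error_rate(ref, hyp):.0%}")  # 2 errors / 7 words, about 29%

Because insertions count against the system, WER can even exceed 100% on badly garbled output.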
"A lot of spoken data has been collected in terms of oral history and news broadcasts, and people want to extract information from it," says Michael Picheny, head of the Superhuman Speech group. "The magnitude of the MALACH database makes it literally and practically impossible to transcribe, unless you can automate the process. If we achieve any breakthroughs it will be valuable not only for this particular data, but the techniques we develop will also be applicable to any other type of recorded material." So how will accurate and ubiquitous speech recognition affect our world? According to SHS researchers, innovations in this area will likely accelerate commerce and information access, especially through wireless and remote devices. Audio and video broadcasts will become more like print; Just as newspapers are produced in quantity and read when their buyers have time, speech recognition multimedia information can be stored and retrieved when convenient, even if one's hands are busy. For instance, driving directions could be stored on cell phones that use global positioning services to track where you are and when you need the directions. Superhuman Speech even has the potential to narrow the digital divide. People around the world will be able to interact with technology more easily and access more information than ever before, regardless of literacy and skill levels. Combined with the potential for real-time speech translators, which are already being developed, language barriers between cultures could become a thing of the past. Having a stimulating conversation with a machine may still be far in the future, but before the decade is over, your computer should make a great listener. Yadong ------------------------------------------------ There is only one life, and I want to live it successfully. ------=_NextPart_000_0002_01C28445.DFF37740 Content-Type: text/html; charset="Windows-1252" Content-Transfer-Encoding: quoted-printable <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML><HEAD> <META http-equiv=3DContent-Type content=3D"text/html; = charset=3Dwindows-1252"> <META content=3D"MSHTML 6.00.2800.1106" name=3DGENERATOR></HEAD> <BODY> <DIV><FONT face=3D宋体 size=3D2><FONT face=3D"Times New = Roman" size=3D5>You just don't=20 understand! <!-- END TITLE --><BR><BR></FONT><FONT class=3Dbodybold = size=3D+0><!-- START TEASER --><FONT face=3D"Times New Roman">IBM's = Superhuman=20 Speech initiative clears conversational confusion.<BR><BR><I>by Sam=20 Howard-Spink</I> <!-- END TEASER --></FONT></FONT><FONT class=3Dbody = size=3D+0><!-- START BODY --> <P>It's a vision of the future that's been promised for decades =97 = humans and=20 machines interacting with each other by voice. Yet despite over 50 years = of=20 research, today's speech recognition systems are not nearly accurate = enough for=20 widespread use. They work when customized to recognize a trained = individual's=20 voice, such as a secretary accustomed to dictation, or limited data sets = such as=20 telephone numbers or medical vocabulary. A real-time, natural-language=20 recognizer that can cope with all types of speech remains an elusive = goal.</P> <P>IBM competes fiercely to improve automatic speech recognition (ASR). 