[Apologies for cross-postings]
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
ASVspoof 2017 CHALLENGE:
Audio replay detection for automatic speaker verification anti-spoofing
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
Are you good at machine learning for audio signals? Are you good at discriminating 'fake' signals from authentic ones? Are you looking for new audio processing challenges? Do you work in the domain of speaker recognition?
The ASVspoof 2017 challenge might be for you!
CHALLENGE TASK:
Given a short clip of speech audio, determine whether it contains
a GENUINE human voice (live recording), or a REPLAY recording (fake).
You will be provided with a development set containing labeled genuine and replay audio examples, along with further metadata such as the speech content and the devices used in the replay recordings. Your task is to develop a system that assigns a
single 'liveness' or 'genuineness' score to each new audio sample, and to run that system on a set of test files for which the ground truth is withheld. We provide a Matlab-based reference baseline system to help you start developing your new ideas quickly!
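To make the task concrete, here is a minimal, purely illustrative Python sketch of such a scoring system (it is NOT the provided Matlab baseline). It assumes librosa for MFCC extraction and scikit-learn for Gaussian mixture models, and scores each clip by the log-likelihood ratio between a 'genuine' model and a 'replay' model; the file lists in the usage comment are hypothetical placeholders.

# Illustrative only: two diagonal-covariance GMMs (genuine vs. replay)
# scored per clip by an average log-likelihood ratio.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=16000, n_mfcc=20):
    # Load a clip and return a (frames, n_mfcc) MFCC matrix.
    audio, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(wav_paths, n_components=64):
    # Fit one GMM on frames pooled from all training clips of one class.
    frames = np.vstack([mfcc_features(p) for p in wav_paths])
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(frames)

def liveness_score(wav_path, gmm_genuine, gmm_replay):
    # One score per clip: higher means "more likely a live recording".
    feats = mfcc_features(wav_path)
    return gmm_genuine.score(feats) - gmm_replay.score(feats)

# Hypothetical usage with file lists built from the labeled dev data:
# gmm_gen = train_gmm(genuine_train_paths)
# gmm_rep = train_gmm(replay_train_paths)
# scores = {p: liveness_score(p, gmm_gen, gmm_rep) for p in eval_paths}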
For more details, refer to the evaluation plan on the challenge website.
BACKGROUND:
The goal of the challenge series is to protect automatic speaker verification (ASV) systems from being intentionally circumvented with fake recordings, known as 'spoofing attacks' or, in the context of biometrics, 'presentation attacks'. ASVspoof 2017 is the second edition of a challenge first run in 2015. New in ASVspoof 2017 are replay attacks, in particular 'unseen' attacks involving replay environments, devices, and speakers that may differ greatly from those in the development data.
Despite 'ASV' appearing in the challenge title, you do NOT need any knowledge of automatic speaker verification: the task is a standalone replay detection task that can be addressed as a generic acoustic pattern classification problem. We welcome as many new ideas as possible!
SCHEDULE:
Development data published: December 23, 2016
Evaluation data published: February 10, 2017
Evaluation set scores due: February 24, 2017
Results available: March 3, 2017
Interspeech paper deadline: March 14, 2017
Metadata/keys published: May 2017
Interspeech special session: August 2017
REGISTRATION:
Visit the challenge website to register and obtain the development data.
ORGANIZERS:
Tomi Kinnunen, University of Eastern Finland, FINLAND
Nicholas Evans, Eurecom, FRANCE
Junichi Yamagishi, University of Edinburgh, UK
Kong Aik Lee, Institute for Infocomm Research, SINGAPORE
Md Sahidullah, University of Eastern Finland, FINLAND
Massimiliano Todisco, Eurecom, FRANCE
Hector Delgado, Eurecom, FRANCE
CONTACT: