
Postdoc or PhD Position at TU Berlin


The successful candidate will develop and apply signal processing and machine
learning techniques to detect and annotate acoustic events in an
auditory scene analysis and a quality-of-experience setting. Project and
position are part of an international collaborative project which is funded
through the EU FET Open scheme (see brief description below).

Starting date: Immediate

Salary level: E-13 TV-L

The position is for a maximum of three years.

Candidates should hold a recent PhD degree (postdoc position) or Diplom/Master's
degree (PhD position), should have excellent programming skills, and should
have good knowledge of machine learning. Candidates with research
experience in machine learning or its applications to auditory processing
will be preferred.

Application material (CV, list of publications, abstract of PhD thesis (if
applicable), abstract of Diplom/Master's thesis, copies of certificates, and two
letters of reference) should be sent to:

Prof. Dr. Klaus Obermayer
MAR 5-6, Technische Universitaet Berlin, Marchstrasse 23
10587 Berlin, Germany
email: oby@xxxxxxxxxxxxxxx

preferably by email.

All applications received before January 26th, 2014, will be given full
consideration, but applications will be accepted until the position is filled.

TUB seeks to increase the proportion of women and particularly
encourages women to apply. Women will be preferred given equal qualification.

Disabled persons will be preferred given equal qualification.


Consortium Summary:

TWO!EARS replaces current thinking about auditory modelling by a systemic
approach in which human listeners are regarded as multi-modal agents that
develop their concept of the world by exploratory interaction. The goal of
the project is to develop an intelligent, active computational model of
auditory perception and experience in a multi-modal context. Our novel
approach is based on a structural link from binaural perception to judgment
and action, realised by interleaved signal-driven (bottom-up) and
hypothesis-driven (top-down) processing within an innovative expert system
architecture. The system achieves object formation based on Gestalt principles,
meaning assignment, knowledge acquisition and representation, learning,
logic-based reasoning and reference-based judgment. More specifically, the
system assigns meaning to acoustic events by combining signal- and symbol-based
processing in a joint model structure, integrated with proprioceptive and
visual percepts. It is therefore able to describe an acoustic scene in much
the same way that a human listener can, in terms of the sensations that sounds
evoke (e.g. loudness, timbre, spatial extent) and their semantics (e.g. whether
the sound is unexpected or a familiar voice). Our system will be
implemented on a robotic platform, which will actively parse its physical
environment, orientate itself and move its sensors in a humanoid manner.
The system has an open architecture, so that it can easily be modified or
extended. This is crucial, since the cognitive functions to be modelled are
domain and application specific. TWO!EARS will have significant impact on
future development of ICT wherever knowledge and control of aural
experience is relevant. It will also benefit research in related areas such
as biology, medicine and sensory and cognitive psychology.