
Announcing the GrFNN Toolbox, Version 1.2.1



Dear Colleagues,

We are pleased to announce the release of the Gradient Frequency Neural Network (GrFNN – pronounced “Griffin”) Toolbox. GrFNNs are being used to simulate active cochlear responses, auditory brainstem physiology, auditory cortical physiology, pitch perception, tonality perception, dynamic attending, and rhythm perception. Three models are currently available: 1) a canonical nonlinear cochlear model, 2) a dynamical auditory brainstem model, and 3) a cortical model of pulse and meter perception in complex rhythms.

The GrFNN Toolbox is a suite of MATLAB programs for simulating and analyzing signal processing, plasticity, and pattern formation in the auditory system. GrFNNs model auditory processing using networks of tonotopically tuned oscillatory dynamical systems [1], [2], and are capable of supervised and unsupervised learning via Hebbian plasticity. This approach provides a new framework for nonlinear time-frequency analysis based on a realistic account of auditory processes.
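For readers curious about the underlying dynamics: each node in a GrFNN is a canonical (Hopf-type) nonlinear oscillator. Below is a minimal sketch, in pure Python for brevity (the toolbox itself is MATLAB), of a single oscillator from the truncated canonical model of [1], integrated with a forward Euler step. The function and parameter names are illustrative assumptions, not the toolbox's API.

```python
# Hypothetical sketch of one canonical oscillator (truncated to the cubic
# term), integrated with forward Euler. Not the GrFNN Toolbox API.
import math

def simulate(alpha=1.0, beta=-1.0, omega=2 * math.pi,
             z0=0.1 + 0.0j, dt=0.001, t_end=10.0):
    """Integrate dz/dt = z * (alpha + i*omega + beta*|z|^2)."""
    z = z0
    for _ in range(int(t_end / dt)):
        z += dt * z * (alpha + 1j * omega + beta * abs(z) ** 2)
    return z

# With alpha > 0 and beta < 0, the oscillator is spontaneously active and
# its amplitude settles near sqrt(-alpha/beta) (here, close to 1.0).
z_final = simulate()
print(abs(z_final))
```

A full GrFNN couples many such oscillators, with natural frequencies `omega` spaced along a tonotopic gradient and connection weights adapted by Hebbian plasticity.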

The GrFNN Toolbox is available at: http://musicdynamicslab.uconn.edu/home/multimedia/grfnn-toolbox/

Development was supported in part by funding from the National Science Foundation (BCS-1027761) and the Air Force Office of Scientific Research (FA9550-12-10388).

References:
[1] E. W. Large, F. V. Almonte, and M. J. Velasco, "A canonical model for gradient frequency neural networks," Phys. Nonlinear Phenom., vol. 239, pp. 905–911, 2010.
[2] J. C. Kim and E. W. Large, "Signal processing in periodically forced gradient frequency neural networks," Front. Comput. Neurosci., vol. 9, no. 152, 2015.


Edward Large
Director, Music Dynamics Laboratory
Professor of Psychological Sciences
University of Connecticut
edward.large@xxxxxxxxx
http://musicdynamicslab.uconn.edu