
Re: [AUDITORY] Online rhythm production experiments: Update



P.S.

Henkjan discussed three types of systems in his previous message and referred to our method as the third type (post-processing methods).
In fact, our technology fully realizes what he called type 2 (client-side software), as it can be used directly from the browser with no additional hardware or software. The trick is to use signal processing and computer audio in ways that are explained in the paper: https://www.biorxiv.org/content/10.1101/2021.01.15.426897v2
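To give a flavor of the idea, here is a purely illustrative sketch, not the actual REPP pipeline (all signals and parameter values below are made up; the real method is documented in the preprint): embed a known marker sound in the stimulus, record the playback through the participant's microphone, and recover the exact playback time by cross-correlation.

# Illustrative sketch only (not the actual REPP pipeline): recover the
# precise playback time of a known marker sound from a microphone
# recording by cross-correlation. All signals and values are synthetic.
import numpy as np
from scipy.signal import correlate

sr = 44100                                    # sample rate (Hz)
t = np.arange(int(0.01 * sr)) / sr            # 10 ms marker
marker = np.sin(2 * np.pi * 3000 * t) * np.hanning(t.size)

# Simulate a noisy client-side recording with the marker played at 0.5 s.
rng = np.random.default_rng(0)
recording = 0.01 * rng.standard_normal(sr)    # 1 s of background noise
start = int(0.5 * sr)
recording[start:start + marker.size] += marker

# The cross-correlation peak localizes the marker to within one sample
# (~0.02 ms at 44.1 kHz), independent of keyboard or callback timing.
corr = correlate(recording, marker, mode="valid")
print(f"marker found at {np.argmax(corr) / sr * 1000:.2f} ms")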

Very best,
Nori



Nori Jacoby

Max Planck Group Leader, “Computational Auditory Perception”

Max Planck Institute for Empirical Aesthetics

Grüneburgweg 14, 60322 Frankfurt am Main, Germany

nori.jacoby@xxxxxxxxx +49 69 8300479-820


From: Jacoby, Nori
Sent: Tuesday, January 26, 2021 5:27:41 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: [AUDITORY] Online rhythm production experiments: Update
 

Dear auditory list,

We would like to share this preprint presenting and validating a technology recently developed in our group: REPP (Rhythm ExPeriment Platform). REPP measures sensorimotor synchronization (SMS) in online experiments with high temporal fidelity while running on hardware and software available to most online participants. The preprint describes the technology in detail and validates it in a series of calibration and behavioral experiments. We demonstrate that REPP achieves high temporal accuracy (latency and jitter within 2 ms on average) and high test-retest reliability, both in the laboratory and online. We plan to release the technology as a free and open-source framework alongside the journal version of the paper.

This technology is fully automated and customizable, enabling researchers to monitor online experiments in real time and to implement a wide variety of SMS paradigms. For example, using REPP we successfully replicated online a transmission-chain experiment that estimates perceptual priors for simple rhythms via iterated reproduction of random temporal sequences (a toy version is sketched below). In a recent paper, we also show that REPP can be used to collect a large tapping dataset from more than 500 participants to study individual differences in SMS in the general population (Niarchou et al., 2021). We therefore believe this technology can support SMS experiments that would be nearly impossible in the lab, while massively increasing the scalability and speed of data collection.

https://www.biorxiv.org/content/10.1101/2021.01.15.426897v2
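
For concreteness, here is a toy simulation of the transmission-chain logic mentioned above. The "participant" is simulated with Gaussian motor noise; all parameter values are illustrative, not those used in our experiments.

# Toy simulation of iterated reproduction (transmission chain): a random
# rhythm is "reproduced", and the reproduction becomes the next stimulus.
# The participant is simulated with Gaussian motor noise; in the real
# experiment the reproductions come from human tapping collected via REPP.
import numpy as np

rng = np.random.default_rng(0)

def simulated_reproduction(intervals_ms, noise_ms=15.0):
    """Crude stand-in for a human reproduction of a rhythm."""
    noisy = intervals_ms + rng.normal(0.0, noise_ms, size=intervals_ms.shape)
    return noisy * intervals_ms.sum() / noisy.sum()  # keep total duration

# Seed: three random intervals summing to 2000 ms (illustrative values).
intervals = rng.dirichlet(np.ones(3)) * 2000.0
chain = [intervals]
for _ in range(5):                     # five generations of the chain
    intervals = simulated_reproduction(intervals)
    chain.append(intervals)

# With human participants, reproductions drift toward perceptual priors
# (e.g., integer-ratio rhythms; Jacoby & McDermott, 2017).
print(np.round(np.vstack(chain), 1))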

 
Very best,
Nori Jacoby, Manuel Anglada-Tort, Peter Harrison


Nori Jacoby

Max Planck Group Leader, “Computational Auditory Perception”

Max Planck Institute for Empirical Aesthetics

Grüneburgweg 14, 60322 Frankfurt am Main, Germany

nori.jacoby@xxxxxxxxx +49 69 8300479-820



---------- Forwarded message ---------
From: Prof. dr Henkjan Honing <honing@xxxxxx>
Date: Mon, Oct 26, 2020 at 5:24 AM
Subject: [AUDITORY] Online rhythm production experiments: Update
To: AUDITORY@xxxxxxxxxxxxxxx <AUDITORY@xxxxxxxxxxxxxxx>



Thanks for the suggestions. Below is a brief summary of the responses I received. They came in three flavors:

1) solutions suggesting specific hardware at the client side (e.g., a two-channel audio card)
2) solutions using client-side software (e.g., JavaScript)
3) offline and/or post-processing solutions

For our purpose (relatively large-scale online rhythm production experiments), solution type 1 is unrealistic.
[input from Werner Hemmert and others]

Solution type 2 has been tried by several researchers/institutes (using, e.g., PsychoPy, JavaScript, etc.). However, most report, as expected, relatively large timing errors, largely due to keyboard scan rates, drivers, and/or the operating system (as reported in the references mentioned in the original message), despite psychopy.org's claim of <4 ms precision in online studies.
[Input from Ignacio Spiousas, Nick Haywood, Ben Schultz, Kyle Jasmin and others]
N.B. PeerJ recently published a comparative study [1]

Solution type 3 was suggested by some: i.e., record the rhythmic pattern by tapping (e.g., with a pencil on your desk or on the device microphone) along with the streamed sound at the client side, upload the resulting audio file using a standard browser, and analyse it at the server side using onset detection and cross-correlation techniques. Depending on the sampling rate, latencies can be reduced to 1 ms or less.
[Input from Roger Dannenberg, Krzysztof Basiński, Justin London and others] 
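
As an illustration, here is a minimal sketch of such a post-processing pipeline. File names and thresholds are hypothetical, and a real implementation would need a more robust onset detector and a way to separate taps from the stimulus itself.

# Illustrative post-processing sketch (hypothetical file names and
# thresholds): find the streamed stimulus in the uploaded client-side
# recording by cross-correlation, then extract tap onsets by simple
# envelope thresholding. Assumes mono audio at a common sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

sr, rec = wavfile.read("participant_upload.wav")   # hypothetical upload
_, stim = wavfile.read("streamed_stimulus.wav")    # the streamed sound
rec = rec.astype(float)
stim = stim.astype(float)

# 1) Locate the streamed stimulus in the recording.
lag = np.argmax(correlate(rec, stim, mode="valid")) / sr   # seconds

# 2) Crude amplitude-threshold onset detection with a 100 ms refractory
# period. (A real pipeline must also separate taps from the stimulus
# itself, e.g. by spectral filtering.)
env = np.abs(rec) / np.abs(rec).max()
min_gap = int(0.1 * sr)
onsets, last = [], -min_gap
for i in np.flatnonzero(env > 0.3):
    if i - last >= min_gap:
        onsets.append(i / sr)
        last = i

# 3) Tap times relative to stimulus onset; at the audio sample rate the
# resolution is ~0.02 ms, consistent with the sub-ms latencies above.
taps = np.array(onsets) - lag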

N.B.1 Ben Schultz has announced that he will make his version of solution type 2 available as open source (repeated below).
N.B.2 Nori Jacoby has announced that their version of solution type 3 will be made available as an appendix to a forthcoming paper (repeated below).

Nevertheless, my hope still rests on some elegant solution of type 2. If you have one, please let us know.

Best,

Henkjan Honing

University of Amsterdam
Faculty of Humanities 
Faculty of Science

——

Subject: RE: Online rhythm production experiments
Date: 20 October 2020 at 07:16:47 CEST

Hi Henkjan and list,
 
I managed to get the latency and variability, synced with audio/video, down to the variability of the input device (~8 ms for keyboards; larger for touch screens, depending on the model). I have integrated this with HTML and JavaScript in Qualtrics and performed benchmark tests using an automated responder. Response times do not appear to be affected by internet connection speed (but I have not yet tried dial-up).
 
I am in the process of writing the manuscript with the benchmarks for publication, and the scripts will be open source. They could be adapted for any webpage.
 
Best regards,
Ben


From: "Jacoby, Nori" <nori.jacoby@xxxxxxxxx>
Subject: Re: Online rhythm production experiments
Date: 20 October 2020 at 16:10:20 CEST
Reply-To: "Jacoby, Nori" <nori.jacoby@xxxxxxxxx>

Hi Henkjan and everybody,

My research group has developed a technology that has solved this problem and allowed us to collect reliable tapping data in an online setup. We’ve successfully collected large tapping datasets this way, and we believe that our method fully addresses the issues mentioned in this thread (low latency and jitter) while also being practical in terms of realistic online data collection. We plan to publish a preprint by the end of the year and therefore make the details of the technology accessible to everyone soon. If you are interested in using the technology earlier, please contact me.

Very best,
Nori Jacoby

Max Planck Group Leader, “Computational Auditory Perception”
Max Planck Institute for Empirical Aesthetics
Grüneburgweg 14, 60322 Frankfurt am Main, Germany
nori.jacoby@xxxxxxxxx +49 69 8300479-820

