
Re: Summary of discussion from ARO session on "Research and Teaching tools for Matlab"



All,

We feel the most effective way to promote better sharing of software is to make a strong link between software and the articles/research it is used for. We expect that someone browsing through a software database or website is much more likely to recognize the utility of a piece of software if he or she can see which articles were based on it. It would be more effective still if journal articles included a link to a database in which the software used to conduct the research is freely available for downloading. However, the latter requires authors to be comfortable sharing their software at the time of publication, which might be difficult to achieve in very competitive fields of research.

I have been thinking about these issues for several years now. Figuring out how to cite my own code, and how to attribute a fair share of credit to other code authors, are problems I have faced directly many times, and I am excited to see that I'm not alone in this.

Recently, I have become aware of several tools that have the potential to make this job easier:

First, if you host your code on GitHub, it is straightforward to assign your repository a DOI and thus make it more easily citable: you can read about how to do this here. This makes it very easy to link a published article to the code that was used in it; "back-linking" is still a challenge (i.e., visitors to your GitHub page do not automatically know which papers used your code).

Second, an interesting proposal for attributing credit to software was recently published. It is theoretically appealing as a way of assigning proportional credit, and it may solve some of the back-linking problem if it ever takes off: Katz, D.S. & Smith, A.M. (2015). Transitive Credit and JSON-LD. Journal of Open Research Software, 3(1), p.e7. DOI: http://doi.org/10.5334/jors.by
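To make the transitive-credit idea concrete, here is a minimal sketch (mine, not the paper's actual JSON-LD schema or tooling) of the core mechanism as I understand it: each product declares fractional credit weights for its contributors and for the products it builds on, weights sum to 1, and credit flows down the dependency chain by multiplication. All names and numbers below are hypothetical.

```python
def transitive_credit(products, product_id, weight=1.0, shares=None):
    """Accumulate per-person credit shares by following product dependencies.

    products: dict mapping product id -> {entity: fraction}, fractions sum to 1.
    An entity that is itself a key of `products` is a dependency; anything
    else is treated as a person.
    """
    if shares is None:
        shares = {}
    for entity, fraction in products[product_id].items():
        if entity in products:
            # Dependency: recurse, scaling by this product's weight.
            transitive_credit(products, entity, weight * fraction, shares)
        else:
            # Person: accumulate their share of the credit.
            shares[entity] = shares.get(entity, 0.0) + weight * fraction
    return shares

# Hypothetical example: a paper credits its author 80% and a library 20%;
# the library's credit is split evenly between its two developers.
products = {
    "paper":   {"alice": 0.8, "library": 0.2},
    "library": {"bob": 0.5, "carol": 0.5},
}
print(transitive_credit(products, "paper"))
# {'alice': 0.8, 'bob': 0.1, 'carol': 0.1}
```

The appeal for back-linking is that the credit graph is machine-readable: given such records, one could walk the graph in the other direction and discover which papers drew credit from a given piece of software.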

Both of these ideas rely on an open-source approach to research computing. I consider my code and its development history a necessary "trust-but-verify" component of my work, so it seems natural to have it publicly available at the time of publication. While that opinion is not universal, tools like the Jupyter Notebook, "a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text", give researchers an easy way to share results and provide a fast verification step. In this way, researchers can keep pace with a competitive research field without fully releasing their code until it's ready, while still giving their peers a deeper look into their results than is normally possible with, for example, static figures in a publication.

--
Graham Voysey
Boston University College of Engineering
HRC Research Engineer
Auditory Biophysics and Simulation Laboratory
Binaural Hearing Laboratory
ERB 413