<h3id="orgc4f6c50">"Thoughts" on language/software stability</h3>
<divclass="outline-text-3"id="text-orgc4f6c50">
<p>
As we explained, the programming language used in an analysis has a
clear influence on the reproducibility of your analysis. This is not a
characteristic of the language itself but rather a consequence of the
development philosophy of the underlying community. For example, C is a
very stable language with a <a href="https://en.wikipedia.org/wiki/C_(programming_language)#ANSI_C_and_ISO_C">very clear specification designed by a
committee</a> (even though some compilers may not respect this norm).
</p>
<p>
On the other end of the spectrum, <a href="https://en.wikipedia.org/wiki/Python_(programming_language)">Python</a> had a much more organic
development, based on a readability philosophy and valuing continuous
improvement over backwards compatibility. Furthermore, Python is
commonly used as a wrapping language (e.g., to easily use C or FORTRAN
libraries) and has its own packaging system. All these design choices
often make reproducibility a bit painful with Python, even
though the community is slowly taking this into account. The transition from Python 2 to the not fully backwards-compatible Python 3 has been a particularly painful process, not least because the two languages are so similar that it is not always easy to figure out whether a given script or module is written in Python 2 or Python 3. It is not even rare to see Python scripts that work under both Python 2 and Python 3 but produce different results, due to the change in the behavior of integer division.
</p>
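<p>
For example, the following snippet illustrates this division pitfall: the very same expression yields different results under the two interpreters.
</p>
<div class="org-src-container">
<pre class="src src-python"># Under Python 2, "/" between two integers is integer division;
# under Python 3, it is true division, and "//" is integer division.
print(7 / 2)   # Python 2: 3        Python 3: 3.5
print(7 // 2)  # Python 2: 3        Python 3: 3
</pre>
</div>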
<p>
<ahref="https://en.wikipedia.org/wiki/R_(programming_language)">R</a>, in comparison is much closer (in terms of developer community) to
languages like <ahref="https://en.wikipedia.org/wiki/SAS_(software)">SAS</a>, which is heavily used in the pharmaceutical
industry where statistical procedures need to be standardized and rock
solid/stable. R is obviously not immune to evolutions that break old
versions and hinder reproducibility/backward compatibility. Here is a
relatively recent <ahref="http://members.cbio.mines-paristech.fr/~thocking/HOCKING-reproducible-research-with-R.html">true story about this</a> and some colleagues who worked
on the <ahref="https://www.fun-mooc.fr/courses/UPSUD/42001S06/session06/about">statistics introductory course with R on FUN</a> reported us
several issues with a few functions (<code>plotmeans</code> from <code>gplots</code>,
<code>survfit</code> from <code>survival</code>, or <code>hclust</code>) whose default parameters had
changed over the years. It is thus probably good practice to give
explicit values for all parameters (which can be cumbersome) instead
of relying on default values, and to restrict your dependencies as much
as possible.
</p>
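<p>
Here is a minimal sketch of this practice in Python (the same advice applies to the R functions above): the parameters of the call are spelled out explicitly, so a later change in the library's defaults cannot silently alter the result.
</p>
<div class="org-src-container">
<pre class="src src-python">import numpy as np

# Seed the generator and spell out the parameters of each call:
# if a future release changes a default value, the result is unaffected.
rng = np.random.default_rng(seed=42)
data = rng.uniform(low=0.0, high=1.0, size=1000)
counts, edges = np.histogram(data, bins=10, range=(0.0, 1.0), density=False)
print(counts)
</pre>
</div>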
<p>
This being said, the R development community is generally quite
careful about stability. We (the authors of this MOOC) believe that open
source (which makes it possible to inspect how a computation is done and to
identify both mistakes and sources of non-reproducibility) is more
important than the rock-solid stability of SAS, which is proprietary
software. Yet, if you really need to stay with SAS (similar solutions
probably exist for other languages as well), you should know that SAS
can be used within Jupyter using either the <a href="https://sassoftware.github.io/sas_kernel/">Python SASKernel</a> or the
<a href="https://sassoftware.github.io/saspy/">Python SASPy</a> package (step-by-step explanations about this are given
<a href="https://app-learninglab.inria.fr/gitlab/85bc36e0a8096c618fbd5993d1cca191/mooc-rr/blob/master/documents/tuto_jupyter_windows/tuto_jupyter_windows.md">here</a>). Using such a literate programming approach combined with systematic
version and environment control will always help.
</p>
</div>
<h3id="orgabfd56a">Controlling your software environment</h3>
<divclass="outline-text-3"id="text-orgabfd56a">
<p>
As we mentioned in the video sequences, there are several solutions to
control your environment:
</p>
<ul class="org-ul">
<li style="margin-bottom:0;">The easy (preserve the mess) ones: <a href="http://www.pgbovine.net/cde.html">CDE</a> or <a href="https://vida-nyu.github.io/reprozip/">ReproZip</a></li>
<li style="margin-bottom:0;">The more demanding (encourage cleanliness) ones, where you start with a
clean environment and install only what's strictly necessary (and document it):
<ul class="org-ul">
<li style="margin-bottom:0;">The very well-known <a href="https://www.docker.io/">Docker</a></li>
<li style="margin-bottom:0;"><a href="https://singularity.lbl.gov/">Singularity</a> or <a href="https://spack.io/">Spack</a>, which are more targeted toward the specific
needs of high performance computing users</li>
<li style="margin-bottom:0;"><a href="https://www.gnu.org/software/guix/">Guix</a> and <a href="https://nixos.org/">Nix</a>, which are very clean (perfect?) solutions to this
dependency hell and which we recommend</li>
</ul></li>
</ul>
<p>
It may be hard to understand the difference between these different
approaches and decide which one is better in your context.
</p>
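<p>
Whatever tool you choose, it also helps to record, from within the analysis itself, which interpreter and platform were used. Here is a minimal Python sketch of this practice (it complements, but does not replace, the tools above):
</p>
<div class="org-src-container">
<pre class="src src-python">import json
import platform
import sys

# Record the interpreter and platform alongside your results so that
# the environment can at least be identified later.
env = {
    "python": sys.version,
    "implementation": platform.python_implementation(),
    "platform": platform.platform(),
}
print(json.dumps(env, indent=2))
</pre>
</div>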
<p>
Here is a webinar where some of these tools are demoed in a
reproducible research context: <a href="https://github.com/alegrand/RR_webinars/blob/master/2_controling_your_environment/index.org">Controling your environment (by Michael
Mercier and Cristian Ruiz)</a>
</p>
<p>
You may also want to have a look at <a href="http://falsifiable.us/">the Popper conventions</a> (<a href="https://github.com/alegrand/RR_webinars/blob/master/11_popper/index.org">webinar by
Ivo Gimenez through Google Hangout</a>) or at the <a href="https://github.com/alegrand/RR_webinars/blob/master/7_publications/index.org">presentation of Konrad
Hinsen on Active Papers</a> (<a href="http://www.activepapers.org/">http://www.activepapers.org/</a>).
</p>
</div>
<h3 id="org5e7f9a2">Preservation/Archiving</h3>
<div class="outline-text-3" id="text-org5e7f9a2">
<p>
Ensuring software is properly archived, i.e., safely stored so that
it can be accessed in a perennial way, can be quite tricky. If you
have never seen <a href="https://github.com/alegrand/RR_webinars/blob/master/5_archiving_software_and_data/index.org">Roberto Di Cosmo presenting the Software Heritage
project</a>, this is a must-see. <a href="https://www.softwareheritage.org/">https://www.softwareheritage.org/</a>
</p>
<p>
For regular data, we highly recommend using <a href="https://www.zenodo.org/">https://www.zenodo.org/</a>
whenever the data is not sensitive.
</p>
</div>
<h3 id="org2c8d1e4">Workflows</h3>
<div class="outline-text-3" id="text-org2c8d1e4">
<p>
In the video sequences, we mentioned several workflow managers.
You may want to have a look at this webinar: <a href="https://github.com/alegrand/RR_webinars/blob/master/6_reproducibility_bioinformatics/index.org">Reproducible Science in
Bio-informatics: Current Status, Solutions and Research Opportunities
(by Sarah Cohen Boulakia, Yvan Le Bras and Jérôme Chopard)</a>.
</p>
</div>
<h3id="orgdacce8f">Numerical and statistical issues</h3>
<divclass="outline-text-3"id="text-orgdacce8f">
<p>
We have mentioned these topics in our MOOC but could in no way
cover them properly. We only suggest here a few interesting talks
about them.
</p>
<ul class="org-ul">
<li style="margin-bottom:0;"><a href="https://github.com/alegrand/RR_webinars/blob/master/10_statistics_and_replication_in_HCI/index.org">In this talk, Pierre Dragicevic provides a nice illustration of the
consequences of statistical uncertainty and of how some concepts
(e.g., p-values) are commonly misunderstood.</a></li>
<li style="margin-bottom:0;"><a href="https://github.com/alegrand/RR_webinars/blob/master/3_numerical_reproducibility/index.org">Nathalie Revol, Philippe Langlois and Stef Graillat present the main
challenges encountered when trying to achieve numerical
reproducibility, and present recent research work on this topic.</a></li>
</ul>
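<p>
One classic reason numerical reproducibility is hard: floating-point addition is not associative, so merely changing the order of a summation (e.g., because of parallelism or vectorization) can change the result. In Python, for instance:
</p>
<div class="org-src-container">
<pre class="src src-python"># Floating-point addition is not associative: the rounding error
# depends on the order in which the terms are summed.
x = (0.1 + 0.2) + 0.3
y = 0.1 + (0.2 + 0.3)
print(x == y)  # False
print(x, y)    # 0.6000000000000001 0.6
</pre>
</div>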
</div>
<h3 id="org7b4e8c1">Publication practices</h3>
<div class="outline-text-3" id="text-org7b4e8c1">
<p>
You may want to have a look at the following webinars:
</p>
<ul class="org-ul">
<li style="margin-bottom:0;"><a href="https://github.com/alegrand/RR_webinars/blob/master/8_artifact_evaluation/index.org">Enabling open and reproducible research at computer systems’
conferences (by Grigori Fursin)</a>. In particular, this talk discusses
<i>artifact evaluation</i>, which is becoming more and more popular.</li>
<li style="margin-bottom:0;"><a href="https://github.com/alegrand/RR_webinars/blob/master/7_publications/index.org">Publication Modes Favoring Reproducible Research (by Konrad Hinsen
and Nicolas Rougier)</a>. In this talk, the motivations for the <a href="http://rescience.github.io/">ReScience
journal</a> initiative are presented.</li>
<li style="margin-bottom:0;"><a href="https://www.youtube.com/watch?v=HuJ2G8rXHMs">Simine Vazire - When Should We be Skeptical of Scientific Claims?</a>,
which discusses publication practices in social sciences, in
particular HARKing (Hypothesizing After the Results are Known),
p-hacking, etc.</li>
</ul>
</div>
<h3 id="org9a3f5d7">Experimentation</h3>
<div class="outline-text-3" id="text-org9a3f5d7">
<p>
Experimentation was not covered in this MOOC, although it is an
essential part of science. The main reason is that practices and
constraints vary so wildly from one domain to another that the topic could
not be properly covered in a first edition. We would be happy to
gather references you consider interesting in your domain, so do not
hesitate to share them via the forum and we will update this page.
</p>
<ul class="org-ul">
<li style="margin-bottom:0;"><a href="https://github.com/alegrand/RR_webinars/blob/master/9_experimental_testbeds/index.org">A recent talk by Lucas Nussbaum on Experimental Testbeds in Computer
Science</a></li>
</ul>
</div>
<h3id="orgfa4dc3a">Getting the list of installed packages and their version</h3>
<divclass="outline-text-3"id="text-orgfa4dc3a">
<p>
This topic is discussed on <a href="https://stackoverflow.com/questions/20180543/how-to-check-version-of-python-modules">StackOverflow</a>. When using <code>pip</code> (the Python
package installer) from a shell, it is easy to query the list of installed
packages and their versions (e.g., with <code>pip freeze</code>).
</p>
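<p>
The same information can also be obtained from within Python itself, which is handy inside a notebook. Here is a minimal sketch, assuming Python 3.8 or later for <code>importlib.metadata</code>:
</p>
<div class="org-src-container">
<pre class="src src-python"># List installed packages and their versions from within Python,
# roughly equivalent to "pip freeze" on the command line.
import importlib.metadata

for dist in sorted(importlib.metadata.distributions(),
                   key=lambda d: d.metadata["Name"].lower()):
    print(f'{dist.metadata["Name"]}=={dist.version}')
</pre>
</div>
</div>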
<h3id="orgb52d0ce">Installing a new package or a specific version</h3>
<divclass="outline-text-3"id="text-orgb52d0ce">
<p>
This section is mostly a cut-and-paste from the <a href="https://support.rstudio.com/hc/en-us/articles/219949047-Installing-older-versions-of-packages">recent post by Ian
Pylvainen</a> on this topic. It comprises a very clear explanation of how
to proceed.
</p>
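<p>
That post is about R, but the same need arises with Python, where an exact version can be pinned with <code>pip install somepkg==1.2.3</code> (the package name and version here are placeholders). Here is a minimal sketch doing this from within Python:
</p>
<div class="org-src-container">
<pre class="src src-python"># Install an exact, pinned version of a package from within Python.
# "somepkg" and "1.2.3" are placeholders for a real package and version.
import subprocess
import sys

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "somepkg==1.2.3"]
)
</pre>
</div>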
</div>