Retractions and failures to replicate are signs of weak research. But they're also signs of laudable and necessary efforts to identify weak research and improve future research. The #Trump admin is systematically weaponizing these efforts to cast doubt on science as such.
"Research-integrity sleuths say their work is being ‘twisted’ to undermine science."
https://www.nature.com/articles/d41586-025-02163-z
@elduvelle Sure - let’s compound the reproducibility problem (retraction rates are already much higher for high-impact journals, incl. Nature; Brembs et al. 2013, "Deep impact: unintended consequences of journal rank", Frontiers in Human Neuroscience) by adding AI peer reviewers, and watch academic publishing enshittify further. #reproducibility #impactfactor #ScientificJournals
This bluesky thread by Mark Hanson describes a massive project to look at #reproducibility of Drosophila studies - I don't know much about that area, but the approach is pretty exciting and has potential for other topics. https://bsky.app/profile/hansonmark.bsky.social/post/3ltlvkkxyak2k
The killer quote! Prestigious institutions and prestigious journals drive irreproducibility in the life sciences - well, at least in this particular sample.
And yet another one in the ever-increasing list of analyses showing that top journals are bad for science:
"Thus, our analysis shows major claims published in low-impact journals are significantly more likely to be reproducible than major claims published in trophy journals."
To my knowledge, this is the first time that not only prestigious journals but also prestigious institutions have been implicated as major drivers of irreproducibility:
"Higher representation of challenged claims in trophy journals and from top universities"
Starting your journey in scientific research?
Join the beginner-friendly tutorial at #EuroSciPy2025: Managing Scientific Data and Workflows with DataLad
Led by Ole Bialas & Michał Szczepanik
In this hands-on session, you'll learn:
- How to organize & track your data
- How to repeat your experiments reliably
- How to share your results with others
No Git experience required—just curiosity and Python basics! https://euroscipy.org/schedule
#FAIRdata #Reproducibility #Python
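For a flavor of what "repeating your experiments reliably" means in practice: DataLad's `datalad run` records each command together with its inputs and outputs so a run can be re-executed and verified. The stdlib sketch below is NOT DataLad's implementation, just a toy illustration of that provenance idea (file names and the line-counting "experiment" are made up):

```python
# Toy provenance pattern: run a command, then record the command plus
# hashes of its inputs and outputs, so the run can be repeated and checked.
# This only illustrates the idea; DataLad does this properly via git-annex.
import hashlib, json, os, pathlib, subprocess, sys, tempfile

def file_hash(path):
    """SHA-256 of a file's bytes, to detect changed inputs/outputs."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def run_with_provenance(cmd, inputs, outputs, record="provenance.json"):
    """Run `cmd` (an argv list) and write a provenance record next to it."""
    subprocess.run(cmd, check=True)
    entry = {
        "cmd": cmd,
        "inputs": {p: file_hash(p) for p in inputs},
        "outputs": {p: file_hash(p) for p in outputs},
    }
    pathlib.Path(record).write_text(json.dumps(entry, indent=2))
    return entry

# Toy "experiment": count the lines in a raw data file.
os.chdir(tempfile.mkdtemp())
pathlib.Path("raw.txt").write_text("a\nb\nc\n")
count_lines = "open('result.txt','w').write(str(len(open('raw.txt').readlines())))"
entry = run_with_provenance([sys.executable, "-c", count_lines],
                            inputs=["raw.txt"], outputs=["result.txt"])
print("recorded outputs:", list(entry["outputs"]))
```

Re-running the recorded command on unchanged inputs should reproduce the same output hashes; a mismatch tells you something in the pipeline drifted.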
Tips for using Jupyter notebooks as part of a reproducible workflow (one that goes from raw data to research article with a single command): https://docs.calkit.org/notebooks
We invite staff and students at the University of #Groningen to share how they are making #research or #teaching more open, accessible, transparent, or reproducible, for the 6th annual #OpenResearch Award.
Looking for inspiration?
Explore the case studies submitted in previous years: https://www.rug.nl/research/openscience/open-research-award/previous-events
More info: https://www.rug.nl/research/openscience/open-research-award/
#OpenScience #OpenEducation #OpenAccess #Reproducibility
@oscgroningen
Jack Taylor is now presenting a new #Rstats package: "LexOPS: A Reproducible Solution to Stimuli Selection". Jack bravely did a live demonstration based on a German corpus ("because we're in Germany") that generated matched stimuli that certainly made the audience giggle... let's just say that one match involved the word "Erektion"...
There is a paper about the LexOPS package: https://link.springer.com/article/10.3758/s13428-020-01389-1 and a detailed tutorial: https://jackedtaylor.github.io/LexOPSdocs/index.html. There's also a #Shiny app for those who'd rather not use R, which still allows code download for #reproducibility: https://jackedtaylor.github.io/LexOPSdocs/ Really cool and useful project! #WoReLa1 #linguistics #psycholinguistics
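LexOPS itself is an R package, but the core idea of controlled stimulus selection is easy to sketch: for each item in one condition, find an item in the other condition matched on lexical variables (here length exactly and log frequency within a tolerance). A toy Python illustration, with made-up words and numbers, not LexOPS's actual algorithm:

```python
# Toy matched-stimuli selection: pair each condition-A word with an unused
# condition-B word of the same length and similar log frequency.
def match_stimuli(cond_a, cond_b, freq_tol=0.2):
    """cond_a, cond_b: dicts mapping word -> (length, log_frequency).
    Returns a list of (word_a, word_b) pairs; each B word is used once."""
    used = set()
    pairs = []
    for wa, (len_a, freq_a) in cond_a.items():
        for wb, (len_b, freq_b) in cond_b.items():
            if wb in used:
                continue
            if len_b == len_a and abs(freq_b - freq_a) <= freq_tol:
                pairs.append((wa, wb))
                used.add(wb)
                break
    return pairs

# Illustrative (fabricated) lexical stats: word -> (length, log frequency)
nouns = {"table": (5, 3.1), "garden": (6, 2.8)}
verbs = {"think": (5, 3.0), "wander": (6, 2.7), "go": (2, 4.0)}
print(match_stimuli(nouns, verbs))  # → [('table', 'think'), ('garden', 'wander')]
```

Matching on nuisance variables like this is what lets you attribute condition differences to the manipulation rather than to word length or frequency.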
#statstab #378 Selective Inference: The Silent Killer of Replicability
Thoughts: Benjamini overviews the replicability crisis and alternatives to p-values (and their issues), and suggests that selective reporting is itself a major issue.
#replication #crisis #pvalue #nhst #selectivereporting #replicability #reproducibility
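The selection effect behind that argument is easy to simulate: if only "significant" estimates get published, the published average overstates the true effect. A toy sketch (the true effect of 0.2, standard error of 0.1, and z > 1.96 cutoff are illustrative assumptions, not numbers from the article):

```python
# Simulate many studies of the same true effect, then "publish" only the
# significant ones and compare the published average to the truth.
import random
import statistics

random.seed(1)
TRUE_EFFECT, SE, N_STUDIES = 0.2, 0.1, 20000

# Each study's estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Two-sided p < .05 selection: keep estimates with |z| > 1.96.
published = [e for e in estimates if abs(e / SE) > 1.96]

print(f"mean of all estimates:       {statistics.mean(estimates):.3f}")
print(f"mean of 'published' results: {statistics.mean(published):.3f}")
```

The all-studies mean sits near the true 0.2, while the "published" mean is noticeably inflated, which is exactly the silent bias selective inference methods try to correct for.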
Just pushed some updates. I have posted the #verilog serial link code. This involved setting up a set of simulation scripts.
If anybody is interested in helping on this project, I'd love to have somebody try to follow the readme.md to install the prerequisites and run the ./run_sim script.
The goal would be to take notes, flesh out anything missing from the prerequisite list, and document how to install it.
I'd be happy to do the same for somebody else's project.
To ensure AI is genuinely open source, we must have complete access to:
1. The datasets used for its training and evaluation
2. The underlying code
3. The design of the model
4. The model's parameters
Without these elements, transparency and the ability to replicate results remain incomplete.