From a Washington Post article on evidence that humans were in North America earlier than previously thought. I myself have a mixed-feelings, middle-ground view on peer review, but I'm in a very different field.
"The peer-review process is designed to help validate scientific claims, but Lowery argues that in archaeology it often leads to a circle-the-wagon mentality, allowing scientists to wave away evidence that doesn’t support the dominant paradigm. He says he isn’t seeking formal publishing routes because “life’s too short,” comparing this aspect of academic science to “the dumbest game I’ve ever played.”"
Measured: reviewer uncertainty (≈ 1/confidence) in journal paper reviews.
Predictive of greater reviewer uncertainty: gender; whether the paper was a protocol.
NOT predictive: reviewer experience, time taken on the review, reviewer nationality, paper version (first submission vs. revision), paper length, readability.
I once peer-reviewed a journal paper and asked for minor revisions, only to find that they had printed the manuscript completely unchanged except for the title. When I told the editors that they had wasted my time, they explained that this author, a woman of about 40, was extremely scary and would get super angry if they made any demands on her.
Aspiring to greater intellectual humility in science
Rink Hoekstra & Simine Vazire, 2021
"We provide a set of recommendations on how to increase intellectual humility in research articles and highlight the central role peer reviewers can play in incentivizing authors to foreground the flaws and uncertainty in their work, thus enabling full and transparent evaluation of the validity of research."
So what would it take to publish a paper here on mastodon and do public peer review? Just an agreement to use a few hashtags like #Paper, and in replies things like #PeerReview, #Accept, #Revise, #Reject? Some automatically generated web and pdf output summarising the thread? Submission to something like Zenodo to give a DOI? Linking user accounts to orcid to verify identity? Only real problem I see is that even with markdown and LaTeX, Mastodon posts are not well suited for longer posts with multiple figures etc. Maybe fine for short results though?
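The tallying half of that "automatically generated summary" idea is simple enough to sketch. A minimal example in Python, assuming the thread's replies have already been fetched as plain text and using the hypothetical hashtag convention proposed above (#PeerReview plus #Accept/#Revise/#Reject); this is a sketch of the idea, not any existing tool:

```python
import re
from collections import Counter

# Hypothetical decision hashtags from the proposed convention above.
DECISIONS = {"#accept", "#revise", "#reject"}

def tally_review_thread(replies):
    """Count review decisions across a list of reply texts.

    A reply counts as a formal review only if it carries #PeerReview
    together with exactly one decision hashtag; anything else
    (general discussion, ambiguous double-tagged posts) is ignored.
    """
    counts = Counter()
    for text in replies:
        tags = {t.lower() for t in re.findall(r"#\w+", text)}
        if "#peerreview" in tags:
            decisions_found = tags & DECISIONS
            if len(decisions_found) == 1:
                counts[decisions_found.pop()] += 1
    return counts

replies = [
    "Solid methods, minor typos. #PeerReview #Accept",
    "The stats need reworking first. #PeerReview #Revise",
    "Nice figures!",  # not a formal review, ignored
]
print(tally_review_thread(replies))
```

Fetching the replies themselves would go through the instance's public API, and the summary could then be rendered to HTML/PDF before depositing on Zenodo; those steps are left out here.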
A researcher complains that, most recently, individual reviewers (probably) misused the #PeerReview of her manuscripts for their own purposes three times … Today in our blog: https://www.laborjournal.de/blog/?p=13563. Any further experiences, opinions, or even suggestions on this?
“The problems with #overpublication, ‘publish or perish’ culture, abusive lab environments, analytical flexibility, p-hacking, clinical trial registration games, grant front-running, intellectual capture, #NonsenseJournals, #FakeJournals, #PeerReview manipulation, moral entrepreneurship, etc. precede the present discussions of paper mills and active falsification/fabrication cases. (1/2)
#academia became a turd that you can't flush down the toilet and keeps farting toxic gases all over the place. Worse still, "we" keep playing the game, pretending that everything is fine and cheerily announcing another publication in a meaningless rat race of factors.
Did anyone receive this kind of review invitation? Perhaps you @j_bertolotti ?
Apparently, they started paying $20 for reviewing! It seems that, finally, someone paid attention to all the complaints about the quality of the existing peer review process. The fee may seem too low, but here in Turkey 🇹🇷 it equals 650 Turkish lira, which is about one week's groceries. @academicsunite #academia #research #publishing #journals #peerreview
I figure there are multiple answers, likely also conflated with conference proceedings; one could even argue for the first issue of Phil Trans B (1665). But hit me with your impression of "the oldest special issue" in #ScientificPublishing?
We were surprised by a recent #Frontiers blog. They make derogatory statements, accuse us of data manipulation & mischaracterize our communications with them. 😔
Last week I attended the 6th Perspectives on Scientific Error Conference at @TUEindhoven
I learned so much! About #metascience, #preregistration, #replicability, #qrp questionable research practices, methods to detect data fabrication, #peerreview, #poweranalysis, artefacts in #ML machine learning...
I'm impressed by the commitment of participants to improve science through error detection & prevention. Thanks to the organizers Noah van Dongen, @lakens, @annescheel, Felipe Romero, and @annaveer
You'll be working with another reviewer to read and run the code and make sure it meets a basic checklist, which usually only takes a few hours; beyond that, you can focus on whatever you'd like. Both of these are collaborative review processes where the goal is to help these packages be usable, well documented, and maintainable, for the overall health of free scientific software.
It's fun, I promise! Happy to answer questions, and boosts welcome.
Edit: feel free to volunteer as a reply here, DM me, or comment on those issues! Anyone is welcome! Some experience with the language is required, but other than that I can coach you through the rest.
One thing that sucks about #PeerReview being so broken, and a vector of domination rather than cooperation, is that, in the best case, reviews can be skillshares as much as anything else. In some code reviews I have given and received, I have taught and learned how to do things that I or the other person wished they knew how to do, but didn't.
That literally can't happen in the traditional model of review, where reviews are strict, terse, and noninteractive. Traditional review also happens way too late, when all the projected work is done. Collaborative, open, early review literally inverts the dreaded "damn reviewers want us to do infinity more experiments" dynamic. Instead, wouldn't it be lovely if, during or even before an experiment, you had a designated person to say "hey, have you thought about doing it this way? If not, I can show you how."
The adversarial system forces you into a position where you have to defend your approach as The Correct One and any change in your Genius Tier experimental design must be only to validate the basic findings of the original design. Reviewers cannot be considered as collaborators, and thus have little incentive to review with any other spirit than "gatekeeper of science."
If instead we adopted some lessons from open source and thought of some parts of reviews as "pull requests", where fixing a bug is partly the responsibility of the person who thinks it should be done differently, but they then also get credit for that work in the same way the original authors do, we could
a) share techniques and knowledge between labs in a more systematic way,
b) have better outcomes from moving beyond the sole genius model of science,
c) avoid a ton of experimental waste from either unnecessary extra experiments or improperly done original experiments,
d) build a system of reviewing that actually rewards reviewers for being collegial and cooperative
Edit: to be super clear here, I know I am not saying anything new, just reflecting on it as I am doing an open review.