MarkHanson

@MarkHanson@fediscience.org

#Biologist interested in #immunity and host #pathogen interactions @ University of Exeter, UK, using #Drosophila: more than melanogaster whenever possible. He/him. 🇨🇦

Also interested in the health of scientific publishing. Who do we give the keys to the car? Let's talk about ways to innovate publishing moving forward, and let's also clear the air about why some of the modern problems exist #AcademicChatter

Bluesky: @hansonmark.bsky.so


MarkHanson, to random

My reflections after being the driving guest-editor of a special issue "Sculpting the Microbiome"

A thread 🧵 1/n

Sculpting the Microbiome: https://royalsocietypublishing.org/doi/full/10.1098/rstb.2023.0057
We need a new special issue lexicon: https://mahansonresearch.weebly.com/blog/we-need-a-new-special-issue-lexicon

deevybee, to Pubtips
@deevybee@mastodon.social
MarkHanson,

@deevybee the constant question (which I don't ask, but others do) is why eLife's APC is $2000 if it's not-for-profit, run by practicing scientists, etc...

I assume this is fair, and going to many good initiatives (including permanent staff salaries that support more careers in science!). But @albertcardona do you know better what I could say in response to these questions? What percent of the APC is directed to pay for costs X, Y, and Z?

MarkHanson,

@steveroyle @BorisBarbour @deevybee @albertcardona @richardsever thanks! Curious if a 2024 update would discuss a different landscape and associated costs. Great to have a reference that spells it out if nothing more :)

MarkHanson,

@albertcardona @deevybee 100% charge for submission! Def agree with the blog.

But at the same time, one would need to do a far better job of journal accountability when a journal desk rejects. Perhaps a tiered charge where you pay a deposit on submission, then you get refunded the vast majority of it if you're desk-rejected. I certainly wouldn't want to pay anything to be desk-rejected (esp. for the reasons I have been in the past). So this really can't be a substantial charge ($50 MAX).

MarkHanson,

@albertcardona @deevybee oh for sure. Desk rejects are necessary and a fact of science life. At the same time, our "Strain on Scientific Publishing" preprint was desk-rejected twice before we got into review. The reason given both times: not of "sufficient general interest." By the second time, we'd already 'taken off' globally, so to speak.

I think most editors I've ever spoken to have said "it's a thankless job" and also "and I likely got things wrong sometimes." Such is the nature of desk rejecting.

MarkHanson, to random

What's the oldest "special issue" you know of?

I figure there are multiple answers, likely conflated with conference proceedings, and one could even argue for the first issue of Phil Trans B (1665). But hit me with your impression of "the oldest special issue"!

MarkHanson, to random

We were surprised by a recent blog. They make derogatory statements, accuse us of data manipulation & mischaracterize our comms with them. 😔

Critiques of our work are welcome. Falsehoods about us and our work are not. Here we set the record straight.
https://the-strain-on-scientific-publishing.github.io/website/posts/response_to_frontiers/

1/n

MarkHanson,

@jonny definitely the way we're choosing to take it 🙂

MarkHanson, to random

โ‡๏ธ Explore our data! โ‡๏ธ

We were unable to release our data alongside our preprint. But we've figured out a workaround! 😀

We've now got a web app you can load to explore our data. Find out how your journal/publisher of interest looks in our dataset! Compare groups!

Customizable plots to see how publishers/journals compare. This includes publishers we didn't highlight in the preprint.

https://the-strain-on-scientific-publishing.github.io/website/posts/app_announcement/

1/n


ct_bergstrom, to random
MarkHanson,

@ct_bergstrom I still don't really understand the motivation behind this. Was the PeerJ co-op model failing?

BorisBarbour, to random

This investigation of Ranga Dias' superconductivity publications is remarkable for multiple reasons.

https://www.nature.com/articles/d41586-024-00716-2

Nobody comes out of it well, but Nature are much more transparent about the editorial process than I can ever remember. (It's a little unclear if that was spontaneous, but, if not, the frequently claimed independence of Nature News came good.)

Thread. /1

MarkHanson,

@brembs @BorisBarbour "For as long as I can remember, they've always made it quite explicit, that their editors reign supreme and reviewers only advise them - and that this goes in both directions."

Isn't that how journals started, and how they're supposed to function? The role of reviewers is to advise the editor, not be the editor and make decisions for the journal.

If editors aren't supposed to make their own judgement calls, why have trained scientist experts be editors at all?

MarkHanson,

@brembs @BorisBarbour Sure, this sometimes gets you the Benveniste affairs of the world... That's what's happened here right? But that's built in to the system, which relies on good science winning out in the end. And it did that here also. So is there really a problem?

Nature's a private company. They're allowed to screw up, and we're allowed to judge the sum of their work and decide if their error rate is unacceptably high. Doing peer review is voluntary, we vote with our feet.

MarkHanson,

@BorisBarbour was in the middle of a 2nd post that maybe responds to that point :)

https://fediscience.org/@MarkHanson/112076157010161685

I've been thinking on this a lot recently... it's kinda messed up that many journals systemize the peer review recommendations in terms of "accept/reject." Like... reviewers are consulted for comments, not to do the editor's job. 1-2 whole generations of scientists have been raised with the idea that editors are just rubber stamps with little power. Is that really the way it should be?

MarkHanson,

@BorisBarbour 100% agree.

Re: "dangerous" - to who?

What sort of error rate should journals be allowed? Shouldn't we just let Nature accept the egg on their face and we all move on?

I guess if I summed my stance: science does not have a no-tolerance policy on being wrong. The issue here stems from giving undue weight to being 'published' as being 'true'.

This isn't some failure of the scientific method. As emphasized here, the scientific method doesn't end at publication.

dmacphee, to Canada
@dmacphee@mas.to

Opposition to vaccination among parents grows, poll suggests

Still a lot of work to do to convince some parents of the importance and safety of vaccination.

https://www.cbc.ca/news/health/canada-poll-vaccination-angus-reid-measles-1.7128145

MarkHanson,

@dmacphee depressingly high numbers eh?

kevinmoerman, to Wikipedia
@kevinmoerman@fosstodon.org

If you need a positive background sound, listen to this:

http://listen.hatnote.com/

It is the sound of people adding free knowledge to the public domain (Wikipedia edits).

I like it because it is a reminder that at any time, somewhere on earth, bit by bit, there is always somebody trying to improve human knowledge.

#OpenScience #Wikipedia

MarkHanson,

@kevinmoerman @alexh where's the trombone for when someone undoes a previous edit? I demand this have the chance to become ska!

Sheencr, to random

Tad windy on the walk today

MarkHanson,

@Sheencr This is a very good dog picture. 10/10 dog.

petersuber, (edited) to random

Clarivate has modified journal impact factors (JIFs) in response to an "increase in both the quantity & sophistication of fraudulent behaviors."
https://clarivate.com/blog/2024-journal-citation-reports-changes-in-journal-impact-factor-category-rankings-to-enhance-transparency-and-inclusivity

It's now cultivating the false & invidious impression that journals w/o JIFs are somehow untrustworthy or fraudulent.

"We have evolved the JIF from an indicator of scholarly impact (the numerical value of the JIF)…to an indicator of both…impact & trustworthiness (having a JIF – regardless of the number)."

MarkHanson,

@petersuber it's actually crazy that they think having an abstract in English, but not necessarily an article in English, is a "quality" indicator. Like... no matter what side you fall on regarding that... that's crazy.

MarkHanson, to random

Folks like Anna Abalkina have done wonders to reveal journal hijacking. This one feels a bit personal...

I just saw an article from "Amino Acids" that had English fragments throughout the abstract, with many simple statements that were just incorrect.

What happened here?

#SciPub #AcademicChatter #ScientificPublishing #PeerReview #OpenAccess 1/n

foaylward, to science
@foaylward@genomic.social

Counting citations hasn't been a reliable measure of scientific impact for a while, especially on platforms like Google Scholar that compile info from random documents. Hyper-authorship, predatory journals, etc have all contributed to the problem.

This preprint just drives home how important it is to measure scientific impact more carefully and without reliance on automated metrics

Google Scholar is manipulatable

https://arxiv.org/abs/2402.04607

MarkHanson,

@elduvelle @foaylward yeah, chiming in, in many countries researchers are incentivized or outright pressured to publish in "high impact" or other box-ticking-exercise journals, and they receive bonuses, salary raises, or positions based on their CV and what is on it.

That's not to say that's good, but it is the reality that ~half the world experiences? And these conversations from Western perspectives often discount that as "bad," but lack appreciation for what drives it despite it being "bad."

MarkHanson, to random

Last year editors at the journal Neuroimage walked out en masse & started something new. This retrospective explains why, and what a positive experience that's been ✊

We all should really stop doing free work for profiteers 🤷‍♂️ #AcademicSky #ScientificPublishing #SciPub

https://www.statnews.com/2024/02/01/scientific-publishing-neuroimage-editorial-board-resignation-imaging-neuroscience-open-access/

deevybee, to Pubtips
@deevybee@mastodon.social

https://www.chemistryworld.com/news/review-mills-identified-as-a-new-form-of-peer-review-fraud/4018888.article
evidence of fake peer review at MDPI.
No doubt they are not the only publisher to have this, but it's hard to see how they could meet their target of very short decision times with an uncorrupted peer review process

MarkHanson,

@deevybee it's a shame they even bother mentioning predatory reports when they could just cite the source alone (Maria).

Friendly reminder for anyone reading: the site/acct "PredatoryReports" assumed a brand and now uses it to blackmail publishers into huge payments to keep their journals off the predatory list - and that's not even mentioning the plagiarized content and general bad faith.

https://blog.cabells.com/2024/01/16/unmasking-a-predator-predatoryreports-org/amp/

neuralreckoning, to random
@neuralreckoning@neuromatch.social

"the challenges that science is experiencing now ... are due to a lack of emphasis on ... the hard intellectual labor of choosing, from the mass of research, those discoveries that deserve publication in a top journal"

๐Ÿค”

https://www.science.org/doi/10.1126/science.ado3040

MarkHanson,

@brembs @jonny @neuralreckoning cont: if a metric finds that "prestigious/respected" journals are indistinguishable from random journals, and even from known crap, then there are two interpretations:

  1. the assessed metric(s) and the sampling are valid to speak to the whole story, and there really is no diff among respected vs random journals.
  2. the assessed metric(s) or sampling is missing a key variable that distinguishes respected journals.

So, which is it? 2/3

MarkHanson,

@jonny @neuralreckoning
As food for thought, I'd be curious to look at the @brembs 2018 IF vs effect size with an IF/SJR bin applied. Why? We agree raw IF is rubbish, but disagree in the concept that citation patterns can indicate quality. I bet if Brembs 2018 data incorporated IF/SJR instead of just IF, you'd find the lowest quartile IF/SJR does waaay better than the highest quartile. & I wonder (don't know what to expect) how IF itself might fare within the lowest IF/SJR quartile subset 3/3

MarkHanson,

@jonny @neuralreckoning @brembs I guess I'm not convinced that in all those things, journals unequivocally fail. & I say that having taken in this full weekend's spirited conversation. And fun! Any disagreements were always good faith 🙂

So before I head off to bed, I'll give a final word re: my position:

I like journals. I even like the journals that I hate ❤️

Gonna leave it there for my own sake. Engaged during travel/wknd, but won't be able to hold the convo through this week... Cheers all!
