neuralreckoning,
@neuralreckoning@neuromatch.social avatar

"the challenges that science is experiencing now ... are due to a lack of emphasis on ... the hard intellectual labor of choosing, from the mass of research, those discoveries that deserve publication in a top journal"

🤔

https://www.science.org/doi/10.1126/science.ado3040

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
Fucking SIGH

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
I dont even know where to start with this thing. It is totally true that we should care about the health of science beyond lab work, and especially shouldnt create systems of class where experimentalists look down on everything else. But like... the entire rest of the framing is so fucked I am amazed.

This editorial could have been a vaguepost CW'd with "venting"

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
I guarantee you that public distrust in science does not stem primarily from researchers being disrespectful to Science editors.

skyglowberlin,
@skyglowberlin@vis.social avatar

@jonny @neuralreckoning I completely agree with a large part of what he says he wants (e.g. people shouldn't be assholes to editors), but the text itself and the framing are just bizarre.

You've got to wonder whether he honestly believes the things he wrote, or if he was in a rush and just tried to make the framing fit?

jonny,
@jonny@neuromatch.social avatar

@skyglowberlin @neuralreckoning i definitely get the vibes of "something pissed me off and i have this platform so i'm gonna write this knowing i need to retrofit some altruistic cause like public trust in science to make it not just seem like a vaguepost about that thing that pissed me off"

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
Maybe if you didnt want to be disrespected by the academic elite you wouldnt operate a venue that directly props up the prestige hierarchy in academia 🤷‍♀️

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny yeah I also didn't know where to start so I thought just quoting it was as good as I could do...

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
I am tempted to get on hypothes.is about it

skarthik,

@jonny @neuralreckoning

That sentence started off well, and then ended up as a word salad (whut?!?). 😂​

And at the end, the editor of the biggest/most impactful journal is implying that elites like him and his institutions are victims of elites in academic institutions. Wow!

tdverstynen,
@tdverstynen@neuromatch.social avatar

@skarthik @jonny @neuralreckoning

WoN’t sOmeoNe tHinK aBoUt VaNiTy jOurNaL eDitOrs?!?

(Like those who override reviewers to publish flashy, but deeply flawed, papers because they come from the labs of well known senior professors https://retractionwatch.com/2023/06/09/how-a-now-retracted-study-got-published-in-the-first-place-leading-to-a-3-8-million-nih-grant/)

skarthik,

@tdverstynen @jonny @neuralreckoning

Yeah... the incentives are so lopsided towards mostly positive studies/hypotheses.

I have been told many a time that I will have no career prospects if I publish "everyone else is seeing phenomenon X in these experiments, but I do not in similar experiments" or "phenomenon X is an artefact".

Aside from publishing, the replication issue concerns me deeply as someone who straddles both theory and experiments. I have conducted long, complicated experiments and gotten robust results (and I am happy I see interesting effects in my data), but I always fear that the experiments and methods we engage in are becoming so complicated, and so hard to record fully (it is easy to overlook things one takes for granted), that replication becomes impossible.

jonny,
@jonny@neuromatch.social avatar

@skarthik
@tdverstynen @neuralreckoning
Everyones methods are too complicated to be replicated from methods sections alone. We need tooling that supports replicability through the entire lifespan of the experiment. See our work with Autopilot:
https://www.biorxiv.org/content/10.1101/807693v2
eg. In the tests section, we link out to a plugin that contains all the code for the tests: https://wiki.auto-pi-lot.com/index.php/Plugin:Autopilot_Paper
https://github.com/auto-pi-lot/plugin-paper
And not just that, but descriptions of how to do the deep methods that go beyond the code like how to hack the oscilloscope to do the measurements we need: https://wiki.auto-pi-lot.com/index.php/Rigol_DS1054Z
and how to build the behavior box: https://wiki.auto-pi-lot.com/index.php/Autopilot_Behavior_Box
Linked all the way down to the CAD with fabrication instructions: https://wiki.auto-pi-lot.com/index.php/Autopilot_Tripoke

All those designs are reusable, the wiki directly supports iteration and credit assignment for typically invisible labor. The data produced preserves all experimental provenance because the tool was designed to do that. It can make use of other researchers tasks and experiments even without having the exact hardware, again because it was designed to do that.

We can make replicability real, but it requires us to think beyond publication practices to the actual means by which we conduct experiments. In 7 years of trying to make that happen, though, I got only a handful of labs to join me, because the trench most labs are dug into is so deep and the desire to reinvent from scratch is so strong - because actual replicability using open source tools and methods intended to be an integrative framework is so unfamiliar!

Replicability and new publishing models go hand in hand - no journal would "publish" an ongoing, publicly editable wiki of methodology shared across many papers; the entire business model is predicated on selling licenses to make singular, atomic communications artifacts that only circumstantially graze by prior work. If a journal wanted to integrate publication into experimental tooling, you better believe thats a surveillance trap. We can do it ourselves.
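(To make the provenance point concrete, here is a minimal sketch of the kind of record such tooling can write alongside every dataset. This is a generic illustration of the idea, not Autopilot's actual API; the task name, parameters, and filename are hypothetical.)

```python
# Generic sketch of provenance capture, NOT Autopilot's real API:
# record enough context alongside the data to re-run the experiment later.
import json
import platform
import subprocess
import sys
from datetime import datetime, timezone

def provenance_record(task_name: str, params: dict) -> dict:
    """Collect the context needed to reproduce a session."""
    try:
        # Exact version of the code that ran (assumes a git checkout).
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        commit = "unknown"
    return {
        "task": task_name,
        "params": params,  # every knob, not just the "important" ones
        "code_version": commit,
        "python": sys.version,
        "platform": platform.platform(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical task and filename: write the record next to the data it describes.
record = provenance_record("two_alternative_forced_choice",
                           {"reward_ms": 20, "stim_freqs": [5000, 10000]})
with open("session_0001.provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```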

albertcardona,
@albertcardona@mathstodon.xyz avatar

@neuralreckoning

The very idea of a "top journal" is so flawed, so harmful, so counterproductive, that whatever follows beyond those words in the blog post wasn't worth reading.

As the old adage says, it's hard for someone to notice something if his livelihood depends on not noticing it.

#ScientificPublishing

MarkHanson,

@neuralreckoning @brembs I'll say he has a point to his whole piece. It's just poorly made, particularly because it's unfocused. Part of the point itself is about the heuristics of evaluating science and scientists. But there is a point to journal prestige. Maybe I managed to put the question forth a bit more... 'tastefully': https://mahansonresearch.weebly.com/blog/do-we-really-need-journals

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs I think it's fair to say we shouldn't be rude to editors because we shouldn't be rude to anyone. But also, if we feel like the job they do is harming science (as I believe it is) we shouldn't hold back from saying so. Trying to say we shouldn't criticise journals because it's rude to editors and implying this is an elitist viewpoint that's hurting public trust in science is not unfocused, it's deeply deceptive.

MarkHanson,

@neuralreckoning @brembs I guess I disagree that they're harming science. I tried to phrase this argument, hopefully more convincingly/thoughtfully, in the blog linked above.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs That's totally reasonable. I was just responding to the idea that in that article he had a point that was poorly made. (a) I don't think that he was making the argument that you're making, he's assuming it and making some other weird argument. (b) I don't think his article was hastily or poorly written, I think it's deliberately deceptive. I don't think you get to be editor in chief of one of the highest profile journals in the world without understanding what you're writing.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs I just read your blog post. If I've understood correctly your argument is that social media isn't a good replacement for journals because it has its own problematic biases built in. I agree that social media isn't sufficient, but I don't think this in any way makes the argument for journals.

The way I'd summarise the problem you're highlighting is that we need to match papers to readers. There are various properties we'd like that matching process to have. We'd like to make sure the matched papers are relevant, high quality and an unbiased selection. We might want it to be consistent and fair. We'd like to know that it doesn't miss papers that are relevant, etc. We'd like the amount of time and energy spent on the matching process to be reasonable and proportionate. We'd like the process not to have negative second order effects on scientific careers.

I think journals fail on almost every single one of these measures. The majority of papers in a given issue of a journal - even one in my area - are not relevant to me. Pre-publication peer review and the journal system don't ensure high quality (multiple studies show how many errors are missed by peer review); indeed, they encourage authors to hide the weak points of their papers and, in extreme cases, to commit fraud. It's highly biased in favour of well connected scientists at big name institutions in rich countries. (This is also true of social media, but it's unclear to me whether it's more or less biased - I haven't seen any evidence about this.) The process is highly random, inconsistent and unfair. It regularly misses important papers that are very relevant to me. The amount of time and energy spent on it is wildly disproportionate to the value of the filtering process. And it has terrible effects on scientific careers, because of the randomness, the bias, and its impact on mental health, overwork, etc.

I would argue that we need a diversity of approaches for paper matching. Curation (a more egalitarian generalisation of what journals do) can be part of that mix, as can social media. There's also algorithmic recommendation (like semantic scholar), collaborative filtering, and arguably many more. On top of that, a fully open and transparent post-publication peer review system to enable us to find errors and judge paper quality.
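(A toy sketch of the "matching papers to readers" framing: rank papers by keyword overlap with a reader's interest profile. Real recommenders like Semantic Scholar's use citation graphs and learned embeddings; all data here is invented, and this is only meant to make the framing concrete.)

```python
# Toy sketch only: Jaccard overlap between a paper's keywords and a
# reader's interests, used to rank papers for that reader.
def score(paper_keywords: set, reader_profile: set) -> float:
    """Jaccard similarity between keyword sets (0 = disjoint, 1 = identical)."""
    if not paper_keywords or not reader_profile:
        return 0.0
    return len(paper_keywords & reader_profile) / len(paper_keywords | reader_profile)

papers = {
    "spiking-network-sims": {"spiking", "simulation", "python"},
    "mouse-behavior-rig": {"behavior", "hardware", "raspberry-pi"},
    "survey-peer-review": {"peer-review", "meta-science"},
}
reader = {"spiking", "python", "meta-science"}

ranked = sorted(papers, key=lambda p: score(papers[p], reader), reverse=True)
print(ranked)  # most relevant papers for this reader first
```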

MarkHanson,

@neuralreckoning @brembs great summary. And I agree with your definition of the problem. I do disagree in a minor way that journals are somehow totally unfit as a heuristic of quality. Even REF scores correlate well (r > .5) with IF (DESPITE the manipulation by circular-citing journals that my Impact Inflation metric in my arXiv paper highlights).

More & more I find the argument "journals aren't perfect, they're not even good!" to be both true & yet not helpful.

A key agreement: pre-pub review is rubbish! 😁

MarkHanson,

@neuralreckoning @brembs I guess I'd say social media is just an example for me though. The core concept is that our information heuristics use network hubs as quality/relevance filters. Journals are just an emergent property of the need for quality information filtering: they provide one such network hub. This could certainly improve, but any replacement system will likely gravitate towards a journal editorial structure just wearing a hat and funny sunglasses.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs journals are quite definitively not an emergent property. The modern journal was deliberately manufactured to create private profit from public money and it's extraordinarily successful at that. I get most of my recommendations from a combination of semantic scholar and social media now, and this is not like a journal.

brembs,
@brembs@mastodon.social avatar

@neuralreckoning @MarkHanson

Yeah, journals are completely useless as a filter - worse than random.

If journals were an emergent property, Mastodon, X, Facebook, etc. would be full of journals. IMHO, it really does say something that nobody ever got the idea to create journals on there.

MarkHanson,

@brembs This really feels like it's too broad. Not all journals are useless or worse than random. This is a very field-specific thing.

I know journals where I don't think I've ever read a bad or overhyped article: Current Biology for instance. That's really useful for me! If I see an article is published in Curr Biol, whether I know the authors or not, I can expect it to be worth my time!

And of course, I can think of many more example journals along a scale of expectations vs. reality.

brembs,
@brembs@mastodon.social avatar

[post not captured: brembs offers a Curr Biol Drosophila paper as a counterexample]

MarkHanson,

@brembs
amazing 😂

To be fair, all I was saying is I don't think I've* ever read an article in Curr Biol I didn't like. Maybe the editors for Curr Biol within my interests/field do a 'better' job?

Just goes to show how this conversation really is multi-faceted. But to be clear, I guess what that'd say to me is that I'd follow the editor of Curr Biol within the insect immunity sphere whether they were at Curr Biol or not.

Very funny that your example is even a #Drosophila paper though 😂

brembs,
@brembs@mastodon.social avatar

@MarkHanson

The main point I make when, e.g. I'm presenting all the data on journal rank:

https://www.frontiersin.org/articles/10.3389/fnhum.2018.00037/full

is to relate our own subjective experience of journals to that of horoscopes. Of course, if we only read the horoscope of our 'sign', we find it to be very accurate: we will find the love of our lives today, we are a lucky person and smart and attractive.

Similarly, due to our lack of perspective, some journals may look ok (or not!). That's exactly where science comes in:

MarkHanson,

@brembs yes. And I'll make this my last post on this train of thought: but despite it being a good analogy, I don't grant it as a totally valid one.

Otherwise things like Thelwall et al. wouldn't find efforts to reward quality somehow correlate well with IF.

I'm confident in my heuristics that when I read an MDPI paper, it's less likely to be useful to me than when I read a Curr Biol paper. I think you would be too? So clearly there are heuristics at play there, and I find value in them.

brembs,
@brembs@mastodon.social avatar

@MarkHanson

Sure, I also find pisces to be the most accurate horoscope for me 🤣

MarkHanson,

@brembs genuine curiosity: you really think if you read through say... OMICS (a predatory group served a cease and desist by the US gov), you won't find any difference in quality compared to reading PLOS? Or say... the average IJMS paper vs the average eLife paper? You really believe that both groups publish quality studies at the same frequency, and it's all just elitism or some other internal bias that makes me think the average quality of the papers in those journals is different?

albertcardona,
@albertcardona@mathstodon.xyz avatar

@MarkHanson @brembs

A crucial point here is that you are averaging across papers published in a journal. Just like the "impact factor" does. I read papers broadly, couldn't care less where they are published. I use my own judgement, not the proverbial book covers. And such covers do not factor into my expectations.

What's critical is finding relevant papers. And journals don't address this at all.

MarkHanson,

@albertcardona well of course, but only as a heuristic. It's not even that I'm more likely to be enthusiastic about reading a paper from a recognized source. It's that I'm less likely to read a paper from a source that's repeatedly failed to be relevant to me.

This comes in the form of quality control. ex: MDPI journals have ridiculously lax peer review (the Strain paper hopefully shows this quantitatively), and MDPI is where colleagues I know send their "well this can't get in anywhere else so..." 1/2

MarkHanson,

@albertcardona so it's come by honestly.

I think the relevant papers point is correct. But all papers are not equally relevant just because they have the right keywords etc... We apply various heuristics. I'm more likely to read a paper if:

a) I previously read a paper from the authors I liked
b) The topic is directly applicable, and after a glance-through it piqued my interest
c) The paper is attracting attention & topic interesting

Those are non-randomly corr with journal choice IMO. 2/2

albertcardona,
@albertcardona@mathstodon.xyz avatar

@MarkHanson

Frankly, most papers I read in depth are preprints: they aren't peer reviewed or published. If I can't review the paper myself by reading it in depth there's not much point in me reading it–I couldn't tell what's right or wrong. To read established facts that I can't evaluate by myself I pick up undergraduate textbooks.

So the journal signal isn't even there to begin with, except for old papers. And these only surface via some form of recommendation, be it a citation in a paper, an email from a colleague or around here. So their host journal is also irrelevant.

Also, on papers perceived as less valuable: why bother writing them? Much less paying a journal to get them out? If it's to collect academic points towards career progression or grants, one could argue those add negative value.

MarkHanson,

@albertcardona believe it or not I agree. I admittedly find myself mostly reading preprints. Yet it's precisely that reason I say they're corr with journal choice: it's often apparent what "tier" of journals the authors will try. Thing is, I don't think "tier" is captured by any metric: it's an insider's take on quality*impact.

& I fully agree: we should stop requiring all work be reviewed & "published." The academic points system (that does exist) is what led to the Strain on Publishing paper.

MarkHanson,

@albertcardona I guess to the "tier" point: journals also reflect heuristics of what the author believes of their work. Some might be known for rigour but less emphasis on impact. Others, vice versa. Few genuinely have both. But that's part of the game theory of this all: authors choose journals to signal to others if this is a paper they should prioritize, and why.

Any "solution to publishing" needs to appreciate the historical reasons for this game theory, & fulfill the needs of the players.

albertcardona, (edited )
@albertcardona@mathstodon.xyz avatar

@MarkHanson

The solution to publishing starts by stopping the use of published papers as evaluation tokens, and by having academics evaluate academics on the basis of work done, not journal name. Removing journals from the academic evaluation problem is tantamount to removing the incentives that led to so much nonsense, including fraud, corruption, and more. It also means evaluation will be slower and require actual expertise – and these are both good things. Science is not in a hurry.

#ScientificPublishing #academia

MarkHanson,

@albertcardona this is very strong-link thinking and I'm here for it!

The challenge is not only to convince other scientists en masse (an incredibly tall order), but also the institutes and funding agencies (somehow, an even taller order exists). And I think most of my devil's advocacy simply stems from this practicality limit. So my goal is to try and avoid letting perfect become the enemy of good as we strive for these changes. 🙂

albertcardona, (edited )
@albertcardona@mathstodon.xyz avatar

@MarkHanson

Convincing whole institutes and funding charities is easier, far easier, than changing scientists at large. And they are as good a fulcrum as we'll ever get, because they have a handle on the reality distortion field device we call funding, and the decision-making process and executive power lie with very few individuals who, on top of it, are motivated to make a mark for the greater good as the means to make a mark for themselves. Human nature can be aligned to get good deeds done.

ryanmaloney,

@MarkHanson @brembs The tell for me is that within most labs, there is clearly a relation between where the papers go and how successful/impactful a project was. There are clearly biases in institutions, ability to spin a story, networks etc., and most labs have their "this paper got unfairly rejected" tale, but it's not like every glam lab has nonstop glam papers, and when you control by lab the journal prestige really is a strong signal of how much a paper advances the field.

brembs,
@brembs@mastodon.social avatar

@ryanmaloney @MarkHanson

Care to share the evidence this assertion is based upon?

MarkHanson,

@brembs @ryanmaloney who were you replying to and which part?

brembs,
@brembs@mastodon.social avatar

@MarkHanson @ryanmaloney

Sorry:

"when you control by lab the journal prestige really is a strong signal of how much a paper advances the field."

ryanmaloney,

@brembs @MarkHanson Entirely a personal anecdote from throughout my scientific career, of labs that I was in or whose entire output I paid close attention to, across multiple fields. Supported by my experience that most labs non-randomly select journals to submit to, tend to prefer journals they perceive as better assuming a chance of successful review, and that journals use editorial judgement on impact to refer papers to "lower tier" journals.

brembs,
@brembs@mastodon.social avatar

@ryanmaloney @MarkHanson

Well, if it's just a personal opinion, I can say that for my lab, there is no correlation (perhaps a weak inverse one) between those of my papers that have advanced my understanding the most and the rank of the journals I published them in.

So it's a tie 🤣

brembs,
@brembs@mastodon.social avatar

@ryanmaloney @MarkHanson

Seriously, though, you point to an, IMHO, crucial point: any stratification we may subjectively perceive in our respective fields is, let me be bold, 99% a result of our own submission strategies and at most .5% due to anything that happens after we submit.

And if this sort of thing is something the community values, then it is trivial to accomplish with a journal replacement.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs agree that journals are not totally unfit as a measure of quality. But we also know it's not great. BMJ did a controlled study with deliberate major and minor errors introduced and found that peer review caught on average less than a third of the major errors.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2586872/

It's hard for me to have an idea of how much better journals are as a quality filter compared to just having a quick flick through and doing a few sniff tests, but I'd be surprised if the benefit justified the effort and the bias the system introduces.

brembs,
@brembs@mastodon.social avatar

@neuralreckoning @MarkHanson

I think you phrased it exactly right earlier, Dan: whatever it is we think journal peer-review does well, the cost/benefit ratio is just abysmal.

MarkHanson,

@brembs @neuralreckoning Exactly. I think I'm only a maverick here (and specifically... a 'maverick' towards Mastodon's Open Science community's sensibilities), in that I fear the road to hell is paved with good intentions.

Calling journals bad for science is easy, because it's true. Replacing them is hard, because almost all replacements propose to ditch editors and reviewers and instead tell everyone to just do those jobs themselves, as if they have both the time and expertise to succeed.

brembs,
@brembs@mastodon.social avatar

@MarkHanson @neuralreckoning

If I understood correctly, then this is a variant of a comment I have been hearing a lot in the last 15 years!

So just to make something completely clear: "replacement" means replacing journals with something superior. I'm not aware of anybody ever proposing to replace journals with something worse.

The bar to improve on journals is so ridiculously low by now, that it's almost impossible for any modern replacement to do worse.

jonny,
@jonny@neuromatch.social avatar

@brembs
@MarkHanson @neuralreckoning
Ya exactly - most sincere arguments for replacing journals argue for more review, usually as an open, continuous process over the lifespan of the work rather than a single 3 person review. Ditching journals == ditching review is a strawman I wish we would move past

jonny,
@jonny@neuromatch.social avatar

@brembs
@MarkHanson @neuralreckoning
As far as the role of social media, its deeper than discovery - part of the problem is seeing review as some special rarified process, but review happens literally all the time in various shades of context, depth, and perspective. Each adds richness, I guarantee that in this very thread, the papers discussed were discussed in some new way and put in context with some other new works that they havent been before. Yes! They are biased! As are all reviews! So why not collect them all and let a reader think for themselves. Review is only "expensive" because we imagine it happening in the traditional labor-intensive process, but it should run as freely as conversation. We just need mediums that support that conversation building to something cumulative

https://jon-e.net/infrastructure/#%E2%80%9Cpeer-reviewed-vs-unrefereed%E2%80%9D-is-purposely-excluded-as-an-axis-o

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning Dan, see footnote 66 there lol

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny immortalised! 😂

jonny,
@jonny@neuromatch.social avatar

@brembs
@MarkHanson @neuralreckoning
If you disagree with the argument I make in that paragraph, you can just highlight it and write about why you disagree directly on the document and it will be visible for everyone. Amazing this whole web technology thing and how it could be used for continuous, public review.
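(For anyone who wants to try this: Hypothesis, the annotation layer alluded to earlier in the thread, exposes a public search API, so pulling the public annotations on a document takes a few lines. A minimal sketch, stdlib only; the target URI is just an example.)

```python
# Fetch public Hypothesis annotations on a document via the public
# search API at api.hypothes.is (no auth needed for public annotations).
import json
import urllib.parse
import urllib.request

uri = "https://jon-e.net/infrastructure/"  # example document to look up
query = urllib.parse.urlencode({"uri": uri, "limit": 20})
with urllib.request.urlopen(f"https://api.hypothes.is/api/search?{query}") as resp:
    results = json.load(resp)

for row in results["rows"]:
    # Each row carries the annotator and the body of their comment.
    print(row["user"], "->", row.get("text", "")[:80])
```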

jonny,
@jonny@neuromatch.social avatar

@brembs
@MarkHanson @neuralreckoning

Even things as simple as being able to link to a specific paragraph are novelties to the traditional publication system. In a more recent work I replaced footnotes with aligned sidenotes and added rich metadata in the page. Both of those are also traditional preprints, but their web forms have received more and better review than I would in a traditional journal from domain experts in disparate fields that certainly would not have been gathered in a single review panel.

as björn says above, the bar is just so ridiculously low, and fear over "what do we do about peer review" is an unnecessary blocker to experimentation. Just self publish and see what happens. You can do it alongside journal submission - any journal that tells you you cant isnt worth submitting to. Give the T&P committees visitation and inbound link stats in addition to citation counts and dare them to say thats an inferior metric. The only thing holding us back is us!

brembs,
@brembs@mastodon.social avatar

@jonny @MarkHanson @neuralreckoning

Precisely!
Any journal replacement today would have peer-review already built in, such that the scholarly community can develop ways of reviewing that are efficient and do not overwhelm the community.

This also encapsulates the big misunderstanding that people still have about social media: social media is something qualitatively different from broadcasting.

One tell-tale sign of this fundamental misunderstanding is the following:

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning

The one aspect of social media mentioned both by @MarkHanson and Holden Thorp - follower counts - is not only the least relevant, it also reveals the mindset of those who mention it. Broadcast syndication has been around since the 1940s:
https://en.wikipedia.org/wiki/Broadcast_syndication
So counting followers is something very old and not particular to what we call social technologies.

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning @MarkHanson

Importantly, thinking that high follower counts say anything about the quality of research is falling into the same trap as thinking citations tell you anything about the quality of the research.

Or does anyone here really think "Avatar" is the best movie ever made and that it is about 50% better than "Star Wars: The Force Awakens"?

https://en.wikipedia.org/wiki/List_of_highest-grossing_films

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning @MarkHanson

The potential for social technologies to be a game changer in science is not the broadcasting aspect. Traditional media, by and large, do this just fine, IMHO.

What is particular to social media is that it is two-way and NOT just one-way broadcasting - even though many traditional organizations still treat social media like just another broadcasting channel.

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning @MarkHanson

Social technologies, if leveraged intelligently in scholarship, support the main function of what journals were meant to serve: discussion, criticism, review, progress.

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning @MarkHanson

In his comment, @albertcardona hinted at it: A journal article is not the end of a research process, it is the beginning of one! All an article is, is a "look what we found!". Using journal articles as evaluation tokens misses the more important aspect: the back-and-forth within the community where each discovery is placed into the larger body of work.

Social technologies have this back-and-forth built into them, which is why we consider them so central:

brembs,
@brembs@mastodon.social avatar

@jonny @neuralreckoning @MarkHanson @albertcardona

https://royalsocietypublishing.org/doi/10.1098/rsos.230207

Maybe it is quite telling about the collective state of scholarship that so few of our societies have understood that their name shares the same root as "social technologies" for a reason?

jonny,
@jonny@neuromatch.social avatar

@brembs
couldnt agree more, clearly ;)

MarkHanson,

@brembs @jonny @neuralreckoning in my defence, I never imagined this social media point would be taken so literally. It doesn't matter if it's bioRxiv or folks handing papers out on the street. My point was about how society has information flow travel through networks, incl. higher potential reach at network hubs. Journals are simply an emergent property of science to provide network hubs. If journals disappear, informal journals will emerge and work back towards being a journal-esque product.

jonny,
@jonny@neuromatch.social avatar

@MarkHanson
@brembs @neuralreckoning
Yes, sorry, didnt intend to make you the opponent here. If what you mean by journal-esque product is some concentration of attention under some name, then yes youre probably right! I guess it would be helpful to have some boundaries around the concept of "journal" then, bc to me that is a relatively circumscribed thing that refers to an integration of review, venue, metadata provider, web host, etc. Another way of thinking about abolishing journals as we know them is a substantial decoupling of those functions, even if there would still be, yes, recognizable points of collection.

albertcardona,
@albertcardona@mathstodon.xyz avatar

@MarkHanson @brembs @jonny @neuralreckoning

In my view, journals are a response to the constraints for distribution: sending letters is expensive, so staple together a few articles and snail mail them as an "issue" in a single letter.

With the internet, individual papers no longer need to be shepherded together, for the distribution channel – the web, or email – offers the necessary granularity. Even offers two-way and many-to-many communication like never before.

If journals were to disappear something else entirely would replace them. Conditions have changed.

#ScientificPublishing

ryanmaloney,

@albertcardona @MarkHanson @brembs @jonny @neuralreckoning
I think the theoretical promise of a journal isn't any sort of collection of relevant papers, but rather a brand/coherent editorial stance/reputation for review quality. While in practice this is mostly just prestige, there are outliers (e.g. PLOS ONE, eLife) that try to be entrepreneurial, and there are some journals that do have a reputation of being more stringent on the sound science and looser on the glam (e.g. J. Neurophys).

jonny,
@jonny@neuromatch.social avatar

@ryanmaloney
@albertcardona @MarkHanson @brembs @neuralreckoning
Yes I think this is how the journals would like us to see them, as brands indicative of a certain review quality (with prestige a hushed and guilty "everyone knows" kind of background hum).

I think we have done a bad job adapting how we talk about journals to the new economic reality of what they actually are, though - increasingly few journals operate solely as journals, and instead operate as one profit vector in a larger megalith of surveillance driven data brokerages. Evaluating journals merely by the quality of the works they publish is not only anachronistic but misses more important considerations that directly impact their function as publishers. Does the rigor of the review matter when publication in a journal is coproductive with a system of research intelligence products that directly determine grant funding and tenure? That fuels an archipelago of derivative products where rent is extracted from the captive fruit of public inquiry? When that business model directly feeds back not only onto which work they accept, but the form of those works as isolated, rarified prestige tokens, are we not missing the much bigger picture by evaluating them as otherwise equal choices varying only in review quality and topic?

ryanmaloney,

@albertcardona @MarkHanson @brembs @jonny @neuralreckoning In as far as a journal is just a name for a group that processes other people’s ideas, I think that a lot of innovations could be done under the rubric of a journal, especially if it does peer review in a way that checks a box for orgs that care about that. I think eLife has been doing great work experimenting, and I’d love to see more journals try more radical ideas with regards to technologies and editorial philosophy.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@ryanmaloney @albertcardona @MarkHanson @brembs @jonny sure, in principle a journal that was willing to be sufficiently radical could be innovative in a good way, and arguably eLife was/is that, but I don't see much sign of it elsewhere, and there's been a huge amount of resistance and pushback against eLife, for example. I also don't think big publishers can be part of any reform movement (commercial, or society if they're big money makers for the society or run by big publishers on behalf of the society). More on that here: https://thesamovar.github.io/zavarka/why-publishers-cannot-be-part-of-reform/

ryanmaloney,

@neuralreckoning @albertcardona @MarkHanson @brembs @jonny I see the eLife pushback less as a sign that journals are unreformable than as a sign that the problem is as much or more on the demand side—many (most?) scientists want, for egotistical and pragmatic reasons, an easy proxy they can show to the funder/dean/search committee that their research is recognized independently as being good, and those same orgs are happy for a pithy proxy that's even modestly correlated with impact or productivity.

ryanmaloney,

@neuralreckoning @albertcardona @MarkHanson @brembs @jonny And similarly, I’m worried that misidentifying journals as the cause rather than a symptom means that pedigree/recommender networking/scientific prizes/previous grant record will just inflate to cover the signal that publishing record gives now, and I think most of those are more problematic than publication record for evaluating scientists.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@ryanmaloney @albertcardona @MarkHanson @brembs @jonny journals aren't the only problem, but they're part of it. They're also partly the means by which those problematic things you mentioned exert influence today. But yeah I agree we have to be careful that whatever changes we make aren't even worse. If we let the reforms be led by powerful interests, commercial or otherwise, that's a much more likely outcome.

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning @ryanmaloney @albertcardona @MarkHanson @brembs symptom/cause assignments are always hard, and in this case we don't have to choose - review and evaluation systems are propped up by journals, journals are propped up by review and evaluation systems. they are coproductive with one another. To say that journals are merely a symptom of these review and evaluation systems is to miss the many-billion dollar a year industry that aggressively manipulates scientific practice at multiple scales and with multiple mechanisms from government lobbying, funding agency integration, industry capture, and so on to ensure that systems of review still strongly favor the profitable publishing regime du jour. To say that review and evaluation systems are merely a symptom of the journal system is to also let the academics that benefit to the tune of their entire careers from the stagnation of our communication systems off the hook. We can and should try and address both simultaneously because they are intrinsically linked to one another. Not appreciating for-profit publishers as being actively adversarial is, as Dan says above, the surest recipe that we will be suckered into a worse system: see the way that the "Open access movement" was permuted into an APC-driven system more profitable than subscriptions, and the next wave of exactly what you're describing as algorithmic recommendation and evaluation systems is exactly the one that the publishers are pushing for next.

MarkHanson,

@brembs @jonny @neuralreckoning for instance: I don't ever look through biorXiv directly. I follow @flypapers etc. as filters for me. And while I love @flypapers, it's not a journal or a colleague. As a heuristic, it's a low-investment low-SNR feed of info. It's no match for a low-investment higher-SNR info feed (e.g. trusted journal, colleague) that I find value in.

And at the core of this discussion is whether journals/info hubs etc. offer value. And next post, there is a crucial point 1/3

MarkHanson,

@brembs @jonny @neuralreckoning cont: if a metric finds that "prestigious/respected" journals are indistinguishable from random journals, and even from known crap, then there's 2 interpretations:

  1. the assessed metric(s) and the sampling are valid to speak to the whole story, and there really is no diff among respected vs random journals.
  2. the assessed metric(s) or sampling is missing a key variable that distinguishes respected journals.

So, which is it? 2/3

MarkHanson,

@jonny @neuralreckoning
As food for thought, I'd be curious to look at the @brembs 2018 IF vs effect size with an IF/SJR bin applied. Why? We agree raw IF is rubbish, but disagree in the concept that citation patterns can indicate quality. I bet if Brembs 2018 data incorporated IF/SJR instead of just IF, you'd find the lowest quartile IF/SJR does waaay better than the highest quartile. & I wonder (don't know what to expect) how IF itself might fare within the lowest IF/SJR quartile subset 3/3
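(A sketch of the check being proposed here, assuming a hypothetical per-journal table; the CSV and its columns ('IF', 'SJR', 'effect_size') are invented, with IF/SJR standing in for the Impact Inflation idea.)

```python
# Bin journals by IF/SJR ("Impact Inflation" proxy) and compare a
# quality proxy across quartiles. All file and column names invented.
import pandas as pd

df = pd.read_csv("journal_metrics.csv")        # hypothetical input
df["impact_inflation"] = df["IF"] / df["SJR"]  # IF inflated relative to SJR
df["ii_quartile"] = pd.qcut(df["impact_inflation"], 4,
                            labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"])

# The prediction above: the quality proxy looks best in the lowest quartile.
print(df.groupby("ii_quartile", observed=True)["effect_size"].describe())
```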

jonny,
@jonny@neuromatch.social avatar

@MarkHanson
@neuralreckoning @brembs
So again I definitely dont disagree that venues and mediums are inevitable and useful, but then the remainder sort of veers in a different direction - even if there is some meaningful distinction between journals, as in traditional academic journals, that doesnt really bear on the question of whether or not journals, as in traditional academic journals, are on balance good or should exist. Similar to "broken clock right twice a day," the usefulness of any grouping mechanism doesnt justify a particular grouping mechanism.

Both things can be true: the metrics and the journal placement can be meaningless. Seeking clarity in metrics to me signals lack of clarity in what we're actually looking for. We are not merely looking for venues that can differentiate work along some metric axis - we are looking for ethical venues that dont exploit and control academic labor. We are looking for functional venues where we dont have to read tea leaves to know whether or not theyre organizing work. We are looking for accessible venues that can both be read and be contributed to by whomever needs them. In those metrics, journals as such unequivocally fail, and those are the more important metrics to me.

MarkHanson,

@jonny @neuralreckoning @brembs I guess I'm not convinced that in all those things, journals unequivocally fail. & I say that having taken in this full weekend's spirited conversation. And fun! Any disagreements were always good faith 🙂

So before I head off to bed, I'll give a final word re: my position:

I like journals. I even like the journals that I hate ❤️

Gonna leave it there for my own sake. Engaged during travel/wknd, but won't be able to hold the convo through this week... Cheers all!

jonny,
@jonny@neuromatch.social avatar

@MarkHanson
@neuralreckoning @brembs
Thats a fair point to agree to disagree on then ;) have a lovely week

brembs,
@brembs@mastodon.social avatar

@MarkHanson @jonny @neuralreckoning

But the evidence is quite clear that IF is quite a good tool to assess 'prestige'; I cited the literature earlier in this discussion. One of the few things IF really does seem to capture well.

Which, in turn, means that any other ranking that significantly deviates from IF likely does not capture prestige equally well.

This is where it would get interesting: low-prestige journals consistently outperforming high-prestige journals in new rankings.

brembs,
@brembs@mastodon.social avatar

@MarkHanson @jonny @neuralreckoning

I'm not a prophet, but I'll be bold today and predict that any ranking that does not have CNS at the top will simply be dismissed, no matter how scientifically valid it is.

🤣

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@brembs @MarkHanson @jonny I remember reading an article written by one of the statisticians employed to create the first version of one of those university rankings (don't remember which one). They built this careful model and showed the results to the boss who said it was no good because Harvard wasn't in first place, and told them to go back and do it again. Wish I could find that article.

brembs,
@brembs@mastodon.social avatar

@neuralreckoning @MarkHanson @jonny

Oh, that article would be worth gold! This is precisely how I imagine all of these rankings work: to lend a pseudoscientific air of evidence-smell to existing figments of imagination 😆

mstimberg,
@mstimberg@neuromatch.social avatar

@brembs @neuralreckoning @MarkHanson @jonny It's not exactly what Dan remembers (and probably a different article), but this article has a number of quotes going into that direction (re U.S. News university ranking), e.g.:

> That is one of the most distinctive features of the U.S. News methodology. Both its college rankings and its law-school rankings reward schools for devoting lots of financial resources to educating their students, but not for being affordable. Why? Morse admitted that there was no formal reason for that position. It was just a feeling. “We’re not saying that we’re measuring educational outcomes,” he explained. “We’re not saying we’re social scientists, or we’re subjecting our rankings to some peer-review process. We’re just saying we’ve made this judgment. We’re saying we’ve interviewed a lot of experts, we’ve developed these academic indicators, and we think these measures measure quality schools.”

https://www.newyorker.com/magazine/2011/02/14/the-order-of-things

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@MarkHanson @brembs @jonny journals are not just a network hub and a network hub doesn't have to be like a journal. A recommender system for example is logically a hub but nothing like a journal. This idea that journals are natural or inevitable seems really dangerous to me.

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
@MarkHanson @brembs
While (at the risk of repeating myself) some venue, of which recommendation systems and journals are examples, is likely inevitable, the journal form's inevitability is ofc a self-fulfilling prophecy. Eg. these conversations often spiral towards "what kind of journal do we want" rather than "what would serve our needs?" So agreed - as long as all we imagine are journals, journals are all we'll get. If we take the dangerous step of splitting apart the needs currently served by journals, daring to include unmet needs we see as natural precisely because of the configuration of traditional journals, like continuity between papers, data, and experimental tooling, and dreaming larger about what scholarly work could look like instead of picking among the scraps we've been given, things get more interesting.

We have of course been down this line of thought many times together :)

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @MarkHanson @brembs you may have said it before but I like the tune so keep on singing it. 😀

timelfen,

@jonny @neuralreckoning @MarkHanson @brembs

I like your take on this, & it’s not far off what I ask the scholarly communities I consult w/: “Why are you involved in publishing? What is it you want or expect a publication to do (& for who specifically)?”

These kinds of questions need to be answered before we get into the weeds of what a venue might look like & options for achieving it within existing constraints.

timelfen,

@jonny @neuralreckoning @MarkHanson @brembs

My question for you in this convo is: What if, after thoughtful deliberation, the answer is "We think this kind of journal (or journal-like-thing) would serve our community well"? Should we treat these poor souls as delusional, captured, or insufficiently imaginative? Or is this answer within the realm of possibility (for us)? How would we recognize if/when this is a good answer?

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@timelfen @jonny @MarkHanson @brembs this is a great and important question. My first thought, despite being no Marxist, is Marx's famous quote about philosophers having sought to interpret the world, but the point is to change it. The aim is not just to understand and serve the existing needs of communities better, it's to fundamentally change the way research (and publishing as a part of that) is done. Of course, you have to work with existing needs as part of that, but serving those existing needs is not the goal. That's not the same as saying that existing communities are full of poor, delusional souls. I'm not trying to be manipulative about this, I'm open and explicit about my goals here (unlike commercial publishers for example).

brembs,
@brembs@mastodon.social avatar

@neuralreckoning @timelfen @jonny @MarkHanson

Yes, pretty much what Dan said.

Ideally, one would develop something with the goal being that the scholarly community would react to testing it with "wow, I had no idea we needed this, but now I can't imagine working in any other way!"

jonny,
@jonny@neuromatch.social avatar

@brembs
@neuralreckoning @timelfen
I take the approach that a lot of what we need from a new kind of journal-like-thing doesnt really look like what we think of publishing looking like at all, and that a lot of the basic communicative patterns already exist just in the margins. So integrating with existing practice, supporting it, supplementing it with things that turn a hard skate across 10 platforms and 20 tools into something fluid, we find ourselves in a place like "wait why do we need journals again?"

Very much not making a thing from my dreams and being like "you are a fool for not liking this," but sticking low to the ground and working on problems that people actually have. Not everyone has to like the same thing and thats the point! Making a space where that heterogeneity can exist without tension, a journal like thing or not.

The main point where I get polemical is that theres no place for obscene profit taking, so trying to figure out ways to shortcircuit the tethers that bind us to that.

jsdodge,

@jonny
Are you aware of a way to do that with arXiv preprints? AFAIK they don’t collect visitation statistics themselves

jonny,
@jonny@neuromatch.social avatar

@jsdodge
Nope! Though there could be a way im not aware of. Try self publishing and you can do that if you want to!
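(If you do self-publish, a minimal sketch of pulling visitation stats straight from your own server logs; the log path and URL pattern are assumptions about one particular setup, not a prescription.)

```python
# Count page views from a self-hosted web server's access log.
import re
from collections import Counter

LOG = "/var/log/nginx/access.log"                  # adjust for your server
PAGE = re.compile(r'"GET (/my-paper/[^ ]*) HTTP')  # hypothetical pages to count

hits = Counter()
with open(LOG) as f:
    for line in f:
        m = PAGE.search(line)
        if m:
            hits[m.group(1)] += 1

for path, n in hits.most_common(10):
    print(f"{n:6d}  {path}")
```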

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning I think this is very idealistic, but: (a) many scientists don't feel comfortable commenting negatively in public on their colleagues' work, (b) especially early career or members of under-represented communities, (c) making good quality and constructive comments and reviews is a learned skill (thinking about how all scientists are assumed to be adequate teachers), ...

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning ... (d) many comments are not written or even articulated aloud, (e) aggregation disrupts the valuable process of gradual discovery. Not saying peer-review is any better at handling any of this, but I think it goes well beyond simply posting, annotating, and aggregating content as an engineered system.

jonny,
@jonny@neuromatch.social avatar

@zackbatist @brembs @MarkHanson @neuralreckoning so ya this is not my endgame for sure, obviously it is more complicated. i was presenting this as an example of how easy it is to just start experimenting and see what happens. there is no reason to only do traditional journal publication and not try anything else, despite the rest of the surrounding problems.

a+b) that's fine, anonymity is important as part of generally supporting polyvalent communication. i certainly would not advocate for everyone always needing to use their real name in any context. I don't see how this is an argument against pseudoanonymous public review when the alternative is often that marginalized scholars are not even invited to be part of the anonymous 3-person panel at the high prestige journal. open review doesn't need to look like an unmoderated, real name annotation, but can take a lot of forms to meet different needs. reviewer co-ops that self-govern their own norms of contribution to meet the needs of the particular communities they serve without needing to compete in a prestige game since the review system is also a venue would be a welcome change to journal review (and already are starting to exist, if not yet an organized movement!)

c) of course! communication is always hard. closed peer review does not train us to communicate directly and constructively with one another, open review can and usually does - it's an entirely different reviewing occasion when you are reviewing something publicly, the reviews that I have been part of are substantially more constructive (importantly, without sacrificing rigor!) precisely because they are public and you're communicating directly with someone, rather than communicating to a document as you slash it apart with red edit marks.

d) that's totally fine. people are and will remain able to speak aloud or out of medium about whatever they want to in whatever world we end up in. i certainly do not want to create a totalizing system of information where everything needs to fit and be entered into The Schema

e) i'm not sure what you mean by this one, public review is sort of orthogonal to the question of discovery for me, and i'm not sure what you mean by aggregation in this context?

jonny,
@jonny@neuromatch.social avatar

@zackbatist @brembs @MarkHanson @neuralreckoning I'd push back a little bit on open review already being idealistic. I would argue that it is eminently practical and practicable - look! it's right there on the web, and you can do it too. I don't and never will claim that these little experiments will "solve the world," but I do think that believing that improving our circumstances is always hopelessly idealistic because we can list off 100 countervailing forces is a powerful demotivator to trying anything at all. We can do it! part of doing it is believing we can do it! part of believing we can do it is not believing we can't do it!

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning Regarding anonymity, whenever I'm open reviewing it feels like a lot of work (ties in with your response to c), so signing anonymously as an early-career feels like a waste of effort and is dissuasive. Regarding c, I think the idea of legitimate venues is really important, I give different energy in a casual reply than in a formal review. These are different things, to me anyway

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning I completely agree with heterogeneity in academic comment, and that pub talk is really important, despite and even because of the privacy afforded to it

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning I see a lot of open science maximalism that emphasizes openness and transparency at all costs (only open science is good science -- insulting and ignorant of the many ways people do research), which sees science as a model to optimize by reducing friction, formalizing connections, making everything discretely organized according to The Schema and I was wrong to lump you with that

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning That being said, I fear that the metadata associated with text-based comments on the web will (and already is) tiering academic commentary. If it doesn't have a doi or permalink, it can't be cited (which leads to another rant about permanence of ideas presented on the web and understood to be formal records, and how this relates to the ability to change one's mind over time, even within the span of a long mastodon thread)

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning Regarding aggregation, I'm thinking about information searching behaviour. Scientists develop ideas over time, not just by being presented with a bunch of prior facts. They come to know prior work and then later learn its shortcomings or relation with other work. It's a very personal and social process and I think getting your advisor or friend to share links and discuss is +

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning - more important than getting a feed of comments to read. I realize that this is about providing the optional ability to do this, rather than prescription of how things should be done, but I see a lot of evangelical "workflow optimization" stuff in open science rhetoric that kinda smells like a worrisome connection

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning and I don't mean to diss being idealistic as in optimistic. I meant in terms of imagining an abstract ideal in relation to how people approach things pragmatically. I think a lot of open science infrastructure is more closely aligned with managerialism than actual scientific practice, an engineer's view of science as a system. If only we can reduce friction and optimize the machine. But I think friction is actually really good and valuable and human

zackbatist,

@jonny @brembs @MarkHanson @neuralreckoning And sorry for breaking this up into so many posts. I'm kinda jealous of your ability to make really long ones :-) gotta have a word with my admin about that

jonny,
@jonny@neuromatch.social avatar

@zackbatist
@brembs @MarkHanson @neuralreckoning
Im packing rn so need to not be on phone so much but let me just say we're pretty much on exactly the same page here ESPECIALLY the managerialism and disconnect between practice and "perfection" in the ideology of not just open science but computing in general. This whole piece is a criticism of that family of ideologies and their consequences: https://jon-e.net/surveillance-graphs/

jonny,
@jonny@neuromatch.social avatar

@zackbatist @brembs @MarkHanson @neuralreckoning (lengthy quote from David Graeber's wonderful The Utopia of Rules here: https://jon-e.net/surveillance-graphs/#more-important-than-the-outcomes-of-these-projects-in-particular)

"The increasing interpenetration of government, university, and private firms has led all parties to adopt language, sensibilities, and organizational forms that originated in the corporate world. While this might have helped somewhat in speeding up the creation of immediately marketable products — as this is what corporate bureaucracies are designed to do — in terms of fostering original research, the results have been catastrophic. […]

A timid, bureaucratic spirit has come to suffuse every aspect of intellectual life. More often than not, it comes cloaked in a language of creativity, initiative, and entrepreneurialism. But the language is meaningless. The sort of thinkers most likely to come up with new conceptual breakthroughs are the least likely to receive funding, and if, somehow, breakthroughs nonetheless occur, they will almost certainly never find anyone willing to follow up on the most daring implications. […]

This is what I mean by "bureaucratic technologies": administrative imperatives have become not the means, but the end of technological development."

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @zackbatist @brembs @MarkHanson oh man I really need to get reading David Graeber - everything I ever hear about him makes me think I've been missing out.

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
@zackbatist @brembs
Undisputedly one of my favorite writers. You'll love him from page one

skarthik,

Second @jonny 's views on Graeber.

Even before reading his magisterial books (especially Debt), you can start here @neuralreckoning which is apposite about the current state of academia and innovation:

"There was a time when academia was society’s refuge for the eccentric, brilliant, and impractical. No longer. It is now the domain of professional self-marketers. As a result, in one of the most bizarre fits of social self-destructiveness in history, we seem to have decided we have no place for our eccentric, brilliant, and impractical citizens. Most languish in their mothers’ basements, at best making the occasional, acute intervention on the Internet."

https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit

That article gives reasons why "disruptive science" has declined - while Nature doesn't seem to know why:

https://www.nature.com/articles/d41586-022-04577-5

@zackbatist @brembs

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @zackbatist @brembs bought "The Utopia of Rules" and will start reading asap. 😊

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @zackbatist @brembs well I'm not even finished with the introduction yet but this is a fantastic read indeed!

jonny,
@jonny@neuromatch.social avatar

@neuralreckoning
@zackbatist @brembs
He's just an extremely good writer writing about extremely good things is all. Wish I could have met him

elduvelle,
@elduvelle@neuromatch.social avatar

@neuralreckoning … and then he goes on to criticize scientists because they’re elitists 🤣
