
neuralreckoning

@neuralreckoning@neuromatch.social

I'm a computational neuroscientist and science reformer. I'm based at Imperial College London. I like to build things and organisations, including the Brian spiking neural network simulator, Neuromatch and the SNUFA spiking neural network community.


elduvelle, (edited) to random

What’s your favourite video streaming program? I’m hesitating between prime video (amazon) and HBOMax…
Have been using Prime and it’s really annoying how even though you’re paying monthly you still have to pay on top to access the best movies / shows, or sometimes they’re not even available…

neuralreckoning,

@elduvelle if only there were some other way to access this torrent of bits.

neuralreckoning, to random

This article was pretty brutal but I think we can't ignore some of the criticisms made of academic conferences.

https://sociologistslefttotheirowndevices.wordpress.com/2022/07/03/why-i-no-longer-attend-academic-conferences/

neuralreckoning, to random

A short argument for why the big publishers cannot be part of a publishing reform effort

Science is stuck in a vicious cycle that is hard to escape. The decision to publish a scientific paper is made based on an evaluation of its likely importance and technical correctness. Scientists are evaluated based on these publication decisions, and resources (jobs, grants and promotions) are distributed accordingly.

The current system distorts scientific priorities. Science is incredibly competitive, resources are allocated on a short-term basis, and the primary metric used to evaluate scientists is their publication record. As a consequence, there is an unavoidable pressure to select problems and design studies likely to be favourably evaluated and published in the short term. This works against long-term scientific value, a fact that appears to be widely acknowledged by working scientists (https://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process).

The current system is a vicious cycle and a stable equilibrium. In principle, we could choose to evaluate scientists and their work in a better way. However, no individual or small group can do this alone. If an institution chooses to hire scientists who do work it believes will be of enduring scientific value, despite that work being unlikely to win short-term grant funding, it will take a huge financial hit. Public research is under such severe resource constraints that this is simply not feasible for most institutions, even if they wished to do so. Similarly, a public funding body that makes decisions based on long-term scientific value rather than short-term publishability is likely to count fewer high-profile papers in its output, and will appear to be underperforming relative to other funding bodies when it is reviewed at the government level. Individual scientists have even less flexibility than these institutions.

Journal prestige cements this problem. It is the widespread availability of an easily calculated metric based on journal prestige that makes this cycle so hard to break. If there were no such metric, different groups could try different approaches and the effect would not be so obvious in the short term. The availability of the metric forces all institutions to follow the same strategy and makes it hard for any of them to deviate from it.

The majority of the big publishers' commercial value rests on their journal prestige. If there were no funding implications to publishing in one journal rather than another, scientists would be free to choose based on price or features. There are widely available solutions with better features at virtually no cost. Consequently, the entire business model of these publishers would collapse without the journal prestige signal.

Big publishers therefore cannot be part of the needed reforms. The success of these reforms would untie the evaluation of the quality of scientific work from the journal it is published in, and this would destroy the business model of these publishers. They will do everything in their power to resist such reform.

Divorcing from the big publishers will not be enough. Journal prestige is the cement of the current negative stable equilibrium, but eliminating it will not guarantee a globally better system. We need systems for publishing and evaluating science that are diverse and under the control of researchers. This is what we intend to do with Neuromatch Open Publishing (https://nmop.io/).

neuralreckoning,

@elduvelle I've wondered that myself. I think maybe unboosting and then boosting again might put it to the top? Also high five! ✋

neuralreckoning,

@elduvelle we've learnt a new Mastodon skill!

neuralreckoning,

@elduvelle if you can go for a good journal like eLife then do, but I wouldn't recommend hurting your career by turning down for-profit journals yet. At this point I'm not sure it will make much difference to anyone but you and your career. At some point that calculation might change, but I don't think we're there yet sadly.

neuralreckoning,

@strangetruther @elduvelle I struggle with this a lot. In the end, I want to change people's minds, and I think I have a way to do it without asking them to give up their best chance at a career in a hyper-competitive environment, so that seems tactically better. But I absolutely acknowledge the force of your argument that thinking like that is also what lets things not change.

neuralreckoning, to random

Naive question (maybe): Is there a definition of 'computation' akin to the mathematical definition of information (entropy/MI)? I don't mean Turing machines. e.g. something that could determine the extent to which a group of neurons/synapses are signalling versus computing? #computation #computerscience #informationtheory #machinelearning #neuroscience
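
For context, the information-theoretic quantities the question refers to, Shannon entropy and mutual information, can be computed directly for discrete variables. The sketch below is purely illustrative (the function names and the example joint probability table are made up here), and it is not an answer to the open question of whether an analogous measure exists for 'computation'.

```python
# Minimal sketch (illustrative only): Shannon entropy H(X) and
# mutual information I(X; Y) for discrete variables, given a joint
# probability table.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]                       # treat 0 * log 0 as 0
    return -np.sum(p * np.log2(p))

def mutual_information(p_xy):
    """I(X; Y) = H(X) + H(Y) - H(X, Y) in bits, for a joint table p_xy
    whose rows index X and columns index Y."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1)             # marginal distribution of X
    p_y = p_xy.sum(axis=0)             # marginal distribution of Y
    return entropy(p_x) + entropy(p_y) - entropy(p_xy)

# Example: two correlated binary variables
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(entropy(p_xy.sum(axis=1)))       # H(X) = 1.0 bit
print(mutual_information(p_xy))        # I(X; Y) ≈ 0.278 bits
```

The open part of the question is whether there is a similarly principled quantity that distinguishes a group of neurons or synapses that is computing from one that is merely signalling; mutual information alone does not make that distinction.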
