pierre_bellec

@pierre_bellec@neuromatch.social

Collaboration not competition | cognitive neuroscience and AI | Neuroimaging | breeder of (artificial) brains at CNeuroMod | BrainHack enthusiast | Prof in psychology at the University of Montreal | Trans rights are human rights | they/them


pierre_bellec, to random

New preprint alert! https://arxiv.org/abs/2403.19421 Sana looked at generating brain encoding maps at voxel level resolution using @sklearn 's ridge regression.

She found that the default methods available do not scale to the size and resolution of @cneuromod 's friends dataset: about 30 hours of fMRI per subject (~70k samples) at high spatial resolution (~2 mm isotropic voxels, ~260k brain targets) to be predicted from ~16k latent features of VGG16, for a total of ~4B regression parameters.

She tried a slight modification of sklearn's parallelization strategy, simply distributing batches of brain targets across multiple CPUs. This scales very effectively with the number of threads and CPUs.
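The batching idea can be sketched in a few lines. This is an illustration only (toy array sizes, joblib standing in for the multi-CPU dispatch), not the actual code from the preprint:

import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((2_000, 200))    # stimulus features (samples x features)
Y = rng.standard_normal((2_000, 8_000))  # brain targets (samples x voxels)

def fit_batch(Y_batch):
    # One ridge fit per batch of voxel targets; sklearn supports multi-output Y.
    return Ridge(alpha=1.0).fit(X, Y_batch).coef_

# Distribute batches of brain targets across CPUs, as described above.
batches = np.array_split(Y, 8, axis=1)
coefs = Parallel(n_jobs=8)(delayed(fit_batch)(batch) for batch in batches)
W = np.vstack(coefs)  # (n_voxels, n_features) encoding weights
print(W.shape)        # (8000, 200)

Each batch is an independent regression problem, so the work parallelizes with essentially no communication between workers.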

Full brain and voxel level encoding maps are included for the first time in a cneuromod publication 🎉

pierre_bellec, to random

A todo list for procrastinators. I've struggled with procrastination forever, sometimes severely. Over the years I've developed a system that works for me to get things done. I wrote this system down first for myself, but I figured some fellow procrastinators may find it useful as well.

Part 1 - Gentle introduction https://pbellec.github.io/todo-procrastinator/intro.html
Part 2 - Some background https://pbellec.github.io/todo-procrastinator/origins_todo.html
Part 3 - The actual "Todo list for procrastinators" https://pbellec.github.io/todo-procrastinator/how_todo_procrastinators.html
Part 4 - Some thoughts on the origins of procrastination https://pbellec.github.io/todo-procrastinator/origins_procrastinators.html

Note that the system can be implemented with any software to manage todo lists. I am personally using @obsidian which I absolutely love.

As a bonus I've added an illustration representing the character Madeline from the game Celeste, together with her inner self. I refer to that duo a lot in Part 4. The illustration is fan-art by GomiGomiPomi, reproduced with permission from the author ("Usage of my drawings is allowed as long as proper credits are given and it's not for commercial purposes").

pierre_bellec,

@jonny Thanks a lot for the encouragement 😊​ The mastodon community is very welcoming and diverse indeed! I only recently heard about tulpas, which seem to be a particularly strong manifestation of a "spirit animal". If you have any specific resources on plural people I am very interested, my DMs are open.

pierre_bellec,

@elduvelle @manisha hehe I use mastodon as a procrastination tool, that's why 😅​

pierre_bellec,

@elduvelle @obsidian Thanks a lot for sharing!! Yes, this idea of an inner self / id / gratification monkey comes in many forms and levels of sophistication. At its extreme, people with tulpas create an almost independent mind inside their head through meditation. I am not sure how much connection there is between all these ideas, but there is definitely a common theme.

deevybee, to statistics
@deevybee@mastodon.social

Need statistical help!
Looking at reported test results in a table of 60 x 5 variables. My question is whether the number reported as significant is higher than expected by chance. It isn't - in fact it's lower.
There's dependency between measures (repeated measures on both variable sets). Could that explain it?
(apologies for reposting across social media channels, but I'm anticipating I may not get a reply!)
#statistics

pierre_bellec,

@deevybee hi Dorothy, I agree that dependencies between tests may explain this observation. For a given threshold alpha and number of tests N, you'll expect on average N x alpha discoveries under the null. Dependencies between the tests will not change the average number of discoveries, but they will impact the variance of the number of discoveries across replications. I think positive dependencies will inflate the variance (you are effectively averaging the outcome of fewer tests than apparent). So if you use a test on the number of discoveries which ignores dependencies, you may end up concluding the number of discoveries is significantly lower than expected by chance. It's analogous to the difference between the Benjamini-Hochberg FDR procedure (which assumes independence or positive dependence) and the Benjamini-Yekutieli FDR procedure (which is valid under arbitrary dependence, but much more stringent).
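A quick simulation illustrates the point (illustrative code, not from the thread): equicorrelated null z-scores keep the same mean number of discoveries as independent ones, but the variance across replications is much larger.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_reps, alpha, rho = 300, 10_000, 0.05, 0.8

# Independent null z-scores vs. equicorrelated ones (shared latent factor).
z_indep = rng.standard_normal((n_reps, n_tests))
shared = rng.standard_normal((n_reps, 1))
z_dep = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((n_reps, n_tests))

for name, z in [("independent", z_indep), ("correlated", z_dep)]:
    p = 2 * stats.norm.sf(np.abs(z))        # two-sided p-values
    n_disc = (p < alpha).sum(axis=1)        # discoveries per replication
    print(f"{name}: mean={n_disc.mean():.1f}, std={n_disc.std():.1f}")

Both means come out near n_tests x alpha = 15, but the standard deviation is several times larger under correlation, which is exactly what would make an "observed vs expected" test that assumes independence miscalibrated.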

jonny, (edited ) to random
@jonny@neuromatch.social

assuming everything else looks good, would not responding to any issues and pull requests by itself be enough for you not to use an open source project in a mission critical context?

Edit at 5 votes: assume there are a reasonable number of new, nontrivial issues or PRs

pierre_bellec,

@jonny I find it hard to answer in a vacuum. In general, it's a hard no. But I have a specific counter-example (gym-retro) where development is minimal (and actually stopped for a while). We've been using it for years to run experiments because there is no alternative and we have all the features we need. Also, the code base is excellent and we have not run into an issue yet. It's just that the company behind the project (OpenAI) stopped development. So count me in the "it depends" category.

baldur, to random
@baldur@toot.cafe

“Visual Studio Code is designed to fracture”

A fairly convincing argument that VS Code is a mechanism for making open source communities dependent on MS. https://ghuntley.com/fracture/

pierre_bellec,

@baldur when MS killed Atom (GRRR) I moved to Pulsar, and I'm very happy about it https://pulsar-edit.dev/

pierre_bellec, to random

This neurolibre preprint is probably unlike anything you've seen before. The science by Mathieu Boudreau, @agahkarakuzu and a large team of collaborators is fantastic, but I'm talking about the tech used for the preprint itself here.

First, it's not just a lame pdf preprint. It's got an html version filled with interactive figures, and even a dashboard! But that's not what's unique. What really matters is that it is fully reproducible, and has been tested for it. By clicking on the small rocket, you can reproduce the figures yourself, from your browser. All the data, all the code, all the dependencies have been published alongside the preprint, and the figures have been generated by the neurolibre servers, not by the authors! Each reproducibility artefact has its own doi, and they are cleanly linked to the doi of the preprint. It is indexed by google scholar, orcid and the like.

Neurolibre is based on the amazing Jupyter Book project, and authors can do 99% of the work themselves just by using Jupyter Book and the Neurolibre technical docs. The technical screening of the submission is automated to a very large extent (it's been adapted from the awesome workflow of the journal of open source software). Check the publication process out, it's on github! https://github.com/neurolibre/neurolibre-reviews/issues/14

Disclaimer: I'm part of the Neurolibre development team. It's been a team effort (see details here), but all of the recent heavy lifting on the platform has been done by @agahkarakuzu. If I can say so myself, this really feels like the publication from the (reproducible) future. Please consider making your next publication a living research object, and submit to Neurolibre, it's open for beta!

This project is part of the Canadian Open Neuroscience Platform (https://conp.ca/), funded by Brain Canada and several partners, including the Courtois foundation, the Montreal Heart Institute, and Cancer Computers.

jonny, to random
@jonny@neuromatch.social

the thing about the reliability of scientific literature is that most scientists I know a) write their own analysis code and b) do not know what software tests are.

pierre_bellec,

@jonny @tdverstynen @neuralreckoning I think part of the solution is to adopt high quality community-driven libraries which are highly flexible. scikit-learn and nilearn are fantastic examples. This limits the need for in-house code, and offers trustworthy implementations. Extensive efforts for testing / review should certainly be allocated to this type of community resource. For individual papers, the truth is that most new methods do not add enough value to really catch on. So extensive testing may be a poor investment of time/resources if the code is never re-used. Making code robust to a diversity of use cases is very hard, but ensuring it behaves on a single dataset and use case seems achievable to me with limited engineering. At least there needs to be a trade-off and some "good enough" standards for what is ultimately a rough proof of concept (aka a research paper).
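To make that "good enough" bar concrete, here is a minimal sketch of the kind of single-use-case test I have in mind. analyze() is a hypothetical stand-in for a paper's in-house analysis; the test plants a known signal in synthetic data and checks that it is recovered:

import numpy as np

def analyze(X, y):
    # Toy stand-in for an in-house analysis: correlation of each column with y.
    return np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

def test_analyze_recovers_known_signal():
    rng = np.random.default_rng(0)
    y = rng.standard_normal(200)
    X = rng.standard_normal((200, 5))
    X[:, 0] = y + 0.1 * rng.standard_normal(200)  # plant a strong signal
    r = analyze(X, y)
    assert r.shape == (5,)                 # output has the expected shape
    assert r[0] > 0.9                      # the planted signal is recovered
    assert np.all(np.abs(r[1:]) < 0.3)     # noise columns stay near zero

A single test like this (runnable with pytest) won't prove the code is robust in general, but it catches the kind of silent indexing or sign error that invalidates a result on the one dataset that matters.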

pierre_bellec, to random

I am finally getting to rebuild my follow circles on mastodon, which I lost when I migrated server. I am sadly discovering that most of the people I used to follow have stopped posting months ago :( :( :( I very much hope that my former science twitter is going to somehow resurrect on here. In the meantime, I'm going to try and build a brand new (active) mastodon circle here. Please send suggestions of your favorite people to follow! neuroAI in particular, but any cog neuroscience will do.

pierre_bellec,

@albertcardona Thanks for the explanations! Very kind of you. I'm going to take time to dive into hashtags and server feeds!
