RossGayler, to mathematics
@RossGayler@aus.social avatar

Maths/CogSci/MathPsych lazyweb: Are there any algebras in which you have subtraction but don't have negative values? Pointers appreciated. I am hoping that the abstract maths might shed some light on a problem in cognitive modelling.

The context is that I am interested in formal models of cognitive representations. I want to represent things (e.g. cats), I don't believe we should be able to represent negated things (i.e. anti-cats), but it makes sense to subtract representations (e.g. remove the representation of a cat from the representation of a cat and a dog, leaving only the representation of the dog).

This might also be related to non-negative factorisation: https://en.wikipedia.org/wiki/Non-negative_matrix_factorization

#mathematics #algebra #AbstractAlgebra #CogSci @cogsci #CognitiveScience #MathPsych #MathematicalPsychology

RossGayler,
@RossGayler@aus.social avatar

@Heterokromia @cogsci

Thanks. Modulo arithmetic is actually of interest for other reasons but I think it's not quite what I'm after here.

Using your arithmetic example and assuming rep(cat) = 1 and rep(dog) = 2, I would want behaviours like:

rep(dog and cat) = 2 + 1 = 3
3 - 2 = 1 (removing the dog leaves the cat)
3 - 1 = 2 (removing the cat leaves the dog)
2 - 2 = 0 (removing the dog from the dog leaves nothing)
2 - 1 = 2 (removing a cat that isn't there changes nothing)
1 - 2 = 1 (removing a dog that isn't there changes nothing)

I suspect that means that the objects of the algebra have to be multidimensional, rather than unidimensional (as numbers appear to be).
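
One existing structure with exactly these behaviours is multiset (bag) difference, where subtraction truncates at zero instead of going negative. As an illustrative sketch (the rep(...) encoding here is my own, not from the thread), Python's collections.Counter implements this: subtracting a Counter drops any count that would become negative.

```python
from collections import Counter

# Hypothetical encoding: each representation is a multiset over feature symbols.
rep_cat = Counter({"cat": 1})
rep_dog = Counter({"dog": 1})

rep_both = rep_cat + rep_dog  # rep(dog and cat)

print(rep_both - rep_cat)     # Counter({'dog': 1}) -> removing the cat leaves the dog
print(rep_dog - rep_cat)      # Counter({'dog': 1}) -> removing an absent cat changes nothing
print(rep_cat - rep_dog)      # Counter({'cat': 1}) -> removing an absent dog changes nothing
print(rep_cat - rep_cat)      # Counter()           -> empty, never negative
```

Note the objects here are multidimensional (one count per feature symbol), which matches the suspicion above that unidimensional numbers are not enough.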

mapto,
@mapto@qoto.org avatar

@RossGayler @Heterokromia @cogsci to me it seems you need to be clearer about your requirements. Are your non-negative and multidimensional requirements independent, as far as you can tell?

If so, a multidimensional (do you know how many dimensions/animals you have?) modulo space sounds like a viable solution. That'd be something denoted as https://www.HostMath.com/Show.aspx?Code=%5Cmathbb%7BZ%7D_k%5En , with k being the cardinality of one dimension (would they need to have different cardinalities?), and n being the number of dimensions.
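
A quick sketch of componentwise arithmetic in that Z_k^n space (k and n here are illustrative choices, not from the thread):

```python
k = 5  # cardinality of each dimension (illustrative)

def add(a, b):
    """Componentwise addition mod k over n-tuples."""
    return tuple((x + y) % k for x, y in zip(a, b))

def sub(a, b):
    """Componentwise subtraction mod k over n-tuples."""
    return tuple((x - y) % k for x, y in zip(a, b))

a = (1, 0, 2)
b = (0, 3, 4)
print(add(a, b))  # (1, 3, 1)
print(sub(a, b))  # (1, 2, 3)
```

One caveat worth noting: modular subtraction wraps around rather than truncating, so every element still has an additive inverse (an "anti-cat"), which may be exactly why modulo arithmetic was judged not quite right above.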

RossGayler, to machinelearning
@RossGayler@aus.social avatar

Most of the Artificial Neural Net simulation research I have seen (say, at venues like NeurIPS) seems to take a very simple conceptual approach to analysis of simulation results - just treat everything as independent observations with fixed-effects conditions, when it might be better conceptualised as random effects and repeated measures. Do other people think this? Does anyone have views on whether it would be worthwhile doing more complex analyses and whether the typical publication venues would accept those more complex analyses? Are there any guides to appropriate analyses for simulation results, e.g. what to do with the results coming from multi-fold cross-validation (I presume the results are not independent across folds because they share cases)?
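
The fold non-independence worry can be made concrete: in k-fold cross-validation, any two training sets share all cases except the two held-out folds, so the k accuracy scores come from heavily overlapping data. A small sketch (plain Python, illustrative numbers):

```python
n, k = 100, 5  # 100 cases, 5 folds (illustrative)

# Partition the case indices into k contiguous held-out folds.
folds = [set(range(i * n // k, (i + 1) * n // k)) for i in range(k)]
all_cases = set(range(n))

# Training set for fold i = everything except fold i's held-out cases.
train = [all_cases - f for f in folds]

# Any two training sets share all cases except the two held-out folds.
shared = len(train[0] & train[1])
print(shared, len(train[0]))      # 60 of 80 cases shared
print(shared / len(train[0]))     # 0.75 = (k - 2) / (k - 1)
```

With 75% of each training set shared between any pair of folds, treating the per-fold scores as independent observations clearly overstates the effective sample size.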

@cogsci #CogSci #CognitiveScience #MathPsych #MathematicalPsychology #NeuralNetworks #MachineLearning

jonny,
@jonny@neuromatch.social avatar

@RossGayler
Aha, well yes, it entirely depends on the question at hand and the experimental design. So e.g. one major distinction is whether you are trying to say something about a model, a family of models, or the data. Parametric statistics is for inference over samples of a definable population, so e.g. a point estimate of accuracy on held-out data is fine if all you're trying to do is make a claim about a single model, since there is no "population" you are sampling from. If you're trying to make a claim about a class of models, then you are sampling from the (usually) real-valued, n-dimensional model space, so there the usual requirements for random sampling within parameter space would apply.

Making a claim about the data is much different, because now you have a joint analysis problem of "the effects of my model" and "the effects of the data" (neuroscientists love to treat the SVMs in their "decoding" analyses as neutral and skip that part, making claims about the data by comparing e.g. classification accuracies as if they were only dependent on the data. Even randomly sampling the subspace there doesn't get rid of that problem, because different model architectures, training regimes, etc. have different capacities for classifying different kinds of source data topologies, but I digress.)

For methods questions like this I try to steer clear of domain-specific papers and go to the stats lit or even stats textbooks, because domain-specific papers are translations of translations, and often have, uh, motivated reasoning. For example, the technique "representational similarity analysis" in neuro is wholly unfounded on any kind of mathematical or statistical proof or theory, and yet it flourishes because it sounds sorta ok and allows you to basically "choose your own adventure" to produce whatever result you want.

For k-fold, it's a traditional repeated-measures problem (depending on how you set it up). The benchmarking paradigm re: standard datasets and comparing accuracy is basically fine if the claim you are making is exactly "my model in particular is more accurate on this particular set of benchmarks." You're right that even for that, to get some kind of aggregated accuracy you would want a multilevel model (MLM) with dataset as a random effect, but since the difference between datasets is often ill-defined and, as you say, based on convenience, I'm not sure how enlightening it would be.

Would need more information on the specific question you had in mind to recommend lit, and I am not a statistician; I just get annoyed with lazy dogshit and think stats and topology (which is relevant because many neuro problems devolve into estimating metric spaces) are interesting rather than a nuisance.

neuralreckoning,
@neuralreckoning@neuromatch.social avatar

@jonny @RossGayler @cogsci I'm very ignorant of statistics, but yeah I agree ML publications are usually pretty poor on this.
