ramikrispin, to ArtificialIntelligence
@ramikrispin@mstdn.social avatar

(1/2) Congratulations to my friend Lior and his co-author Meysam for the release of their new book - Mastering NLP from Foundations to LLMs πŸŽ‰

I met Lior a few years ago at a conference, and since then, I have been following his work in the field of NLP ❀️.

#nlp #python #machinelearning #deeplearning #DataScience #LLM

ramikrispin,
@ramikrispin@mstdn.social avatar

(2/2) The book covers the following topics:
βœ… Mathematical foundations of machine learning and NLP
βœ… Data preprocessing techniques for text data
βœ… Machine learning applications for NLP and text classification
βœ… Deep learning methods for NLP and text applications
βœ… Theory and design of Large Language Models
βœ… Applications of LLM models
βœ… LLM applications with Langchain

The book is for folks who are interested in getting started with NLP and those who wish to delve into LLM applications.
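As a taste of the preprocessing topic listed above, here is a minimal text-cleaning sketch in plain Python (my own illustration, not code from the book):

```python
import re
from collections import Counter

# Minimal text preprocessing: lowercase, tokenize, drop stop words.
# The tiny stop-word list here is illustrative only.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in"}

def preprocess(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

docs = ["The cat sat on the mat.", "A dog and a cat."]
vocab = Counter(tok for doc in docs for tok in preprocess(doc))
print(vocab.most_common(3))
```

Real pipelines typically add stemming or lemmatization and subword tokenization on top of steps like these.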

telescoper.blog, to ai
@telescoper.blog@telescoper.blog avatar

Before I head off on a trip to various parts of not-Barcelona, I thought I’d share a somewhat provocative paper by David Hogg and Soledad Villar. In my capacity as journal editor over the past few years I’ve noticed that there has been a phenomenal increase in astrophysics papers discussing applications of various forms of Machine Learning (ML). This paper looks into issues around the use of ML not just in astrophysics but elsewhere in the natural sciences.

The abstract reads:

Machine learning (ML) methods are having a huge impact across all of the sciences. However, ML has a strong ontology – in which only the data exist – and a strong epistemology – in which a model is considered good if it performs well on held-out training data. These philosophies are in strong conflict with both standard practices and key philosophies in the natural sciences. Here, we identify some locations for ML in the natural sciences at which the ontology and epistemology are valuable. For example, when an expressive machine learning model is used in a causal inference to represent the effects of confounders, such as foregrounds, backgrounds, or instrument calibration parameters, the model capacity and loose philosophy of ML can make the results more trustworthy. We also show that there are contexts in which the introduction of ML introduces strong, unwanted statistical biases. For one, when ML models are used to emulate physical (or first-principles) simulations, they introduce strong confirmation biases. For another, when expressive regressions are used to label datasets, those labels cannot be used in downstream joint or ensemble analyses without taking on uncontrolled biases. The question in the title is being asked of all of the natural sciences; that is, we are calling on the scientific communities to take a step back and consider the role and value of ML in their fields; the (partial) answers we give here come from the particular perspective of physics.

arXiv:2405.18095

P.S. The answer to the question posed in the title is probably β€œyes”.

https://telescoper.blog/2024/05/30/is-machine-learning-good-or-bad-for-the-natural-sciences/

#AI #ArtificialIntelligence #arXiv240518095 #Astrophysics #Cosmology #DataScience #deepLearning #MachineLearning

metin, to ai
@metin@graphics.social avatar

𝘝𝘦𝘳𝘺 𝘍𝘦𝘸 π˜—π˜¦π˜°π˜±π˜­π˜¦ 𝘈𝘳𝘦 𝘜𝘴π˜ͺ𝘯𝘨 'π˜”π˜Άπ˜€π˜© 𝘏𝘺𝘱𝘦π˜₯' 𝘈𝘐 π˜—π˜³π˜°π˜₯𝘢𝘀𝘡𝘴 π˜“π˜ͺ𝘬𝘦 π˜Šπ˜©π˜’π˜΅π˜Žπ˜—π˜›, 𝘚𝘢𝘳𝘷𝘦𝘺 𝘍π˜ͺ𝘯π˜₯𝘴

https://slashdot.org/story/24/05/30/0238230/very-few-people-are-using-much-hyped-ai-products-like-chatgpt-survey-finds

daniel,
@daniel@social.dhelonious.de avatar

@metin That's interesting because in my circle (tech-savvy nerds and researchers) a lot of people use and recommend the use of ChatGPT. For example, the tutor of a scientific containerization course I attended last week used ChatGPT extensively to solve some very specific problems. Of course, you could get the same results using search engines, but an AI is much faster in these cases and can at least point you in the right direction.

metin,
@metin@graphics.social avatar

@daniel Yes, I think it's a matter of time before AI will be widely used. Personally, I barely use ChatGPT, because I don't trust the output yet, due to the hallucinations. I'm waiting until that has been solved. But I know that it's already usable for exact purposes like coding.

lampinen, to ArtificialIntelligence
@lampinen@sigmoid.social avatar

How well can we understand an LLM by interpreting its representations? What can we learn by comparing brain and model representations? Our new paper (https://arxiv.org/abs/2405.05847) highlights intriguing biases in learned feature representations that make interpreting them more challenging! 1/9
#interpretability #deeplearning #representation #transformers

lampinen,
@lampinen@sigmoid.social avatar

For example, if we train a model to compute a simple, linear feature and a hard, highly non-linear one, the easy feature is naturally learned first, but both are generalized perfectly by the end of training. However, the easy feature dominates the representations! 3/9
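A rough way to see this kind of effect in a toy setting (my own sketch under my own assumptions, not the paper's code or exact setup): train a small MLP jointly on an easy linear feature and a hard parity feature of binary inputs, then use linear probes to see how decodable each feature is from the hidden layer.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Toy setup: 8 binary inputs; one easy (linear) target and one hard
# (parity) target, learned jointly by a small MLP.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 8)).astype(float)
easy = X.sum(axis=1)             # linear feature
hard = X[:, :4].sum(axis=1) % 2  # parity of the first 4 bits
Y = np.stack([easy, hard], axis=1)

net = MLPRegressor(hidden_layer_sizes=(64,), activation="relu",
                   max_iter=2000, random_state=0).fit(X, Y)

# Hidden-layer activations (single hidden ReLU layer).
H = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# Linear probes: how decodable is each feature from the representation?
r2_easy = r2_score(easy, Ridge().fit(H, easy).predict(H))
r2_hard = r2_score(hard, Ridge().fit(H, hard).predict(H))
print(f"probe R^2: easy={r2_easy:.2f} hard={r2_hard:.2f}")
```

The linear feature is typically decoded near-perfectly from the representation; how well the parity feature is decoded depends on training, which is the asymmetry the thread is pointing at.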

lampinen,
@lampinen@sigmoid.social avatar

This paper is really just us finally following up on a weird finding about RSA (figure here) from a paper Katherine Hermann & I had at NeurIPS back in the dark ages (2020): https://x.com/khermann_/status/1323353860283326464
Thanks to my coauthors @scychan_brains & Katherine! 9/9

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

Fine Tuning LLM Models – Generative AI Course πŸ‘‡πŸΌ

FreeCodeCamp today released a new course on fine-tuning LLMs. The course, by Krish Naik, covers tuning methods such as LoRA, QLoRA, and quantization, applied to models such as Llama 2, Gradient, and Google's Gemma.

πŸ“½οΈ: https://www.youtube.com/watch?v=iOdFUJiB0Zc

#llm #DataScience #MachineLearning #genai #deeplearning

HxxxKxxx, to ArtificialIntelligence German
@HxxxKxxx@det.social avatar

From 16 to 19 September 2024 we will again be hosting a summer school on "Deep Learning for Language Analysis" at the Universität zu Köln.

More information: http://ml-school.uni-koeln.de/

SIB, to ArtificialIntelligence
@SIB@mstdn.science avatar

β€œThe Protein Universe Atlas is a groundbreaking resource for exploring the diversity of proteins. Its user-friendly web interface empowers researchers, biocurators and, students in navigating the β€œdark matter” to explore proteins of unknown function.”

πŸ₯ That’s what the committee said about this work, one of the #SIBRemarkableOutputs 2023 πŸ‘

πŸ‘‰ Find out more about this and the other outputs: https://tinyurl.com/ye2yrpxx

#deeplearning #proteins


koen, to ArtificialIntelligence
@koen@procolix.social avatar

Paul Gerke presents on #deeplearning infrastructure for #medical #image #analysis at @nluug #nluug #vj2024

metin, (edited ) to ai
@metin@graphics.social avatar

So… Big Tech is allowed to blatantly steal the work, styles and therewith the job opportunities of thousands of artists and writers without being reprimanded, but it takes similarity to the voice of a famous actor to spark public outrage about AI. πŸ€”

https://www.theregister.com/2024/05/21/scarlett_johansson_openai_accusation/

#AI #ArtificialIntelligence #ML #MachineLearning #DeepLearning #LLM #LLMs #OpenAI #SamAltman

rubinjoni,
@rubinjoni@mastodon.social avatar

@metin Better late, than never.

metin,
@metin@graphics.social avatar

@rubinjoni Definitely. πŸ‘

ramikrispin, to machinelearning
@ramikrispin@mstdn.social avatar

MLX Examples πŸš€

MLX is Apple's framework for machine learning applications on Apple silicon. The MLX examples repository provides a set of examples for using the framework, including:
βœ… Text models such as transformer, Llama, Mistral, and Phi-2 models
βœ… Image models such as Stable Diffusion
βœ… Audio and speech recognition with OpenAI's Whisper
βœ… Support for some Hugging Face models

πŸ”— https://github.com/ml-explore/mlx-examples

#MachineLearning #llm #deeplearning #DataScience #Python

Lobrien,

@ramikrispin @BenjaminHan How do this and corenet (https://github.com/apple/corenet) fit together? The corenet repo has examples for inference with MLX for models trained with corenet; is that it, does MLX not have, e.g., activation and loss fns, optimizers, etc.?

ramikrispin,
@ramikrispin@mstdn.social avatar

@Lobrien @BenjaminHan corenet is a deep learning library, whereas MLX is an array framework optimized for high performance on Apple silicon. This means that if you are using a Mac with an M1–M3 chip, it should perform better with MLX on the backend (I haven't tested it myself).

ramikrispin, to ArtificialIntelligence
@ramikrispin@mstdn.social avatar

(1/2) MIT Introduction to Deep Learning πŸš€πŸš€πŸš€

MIT launched the 2024 edition of its Introduction to Deep Learning course by Prof. Alexander Amini and Prof. Ava Amini. The course started at the end of April and will run until June, with lectures published weekly. The syllabus changes from year to year, reflecting the rapid changes in this field.

#deeplearning #MachineLearning #DataScience #AI #genai #python

ramikrispin,
@ramikrispin@mstdn.social avatar

(2/2) The course covers the following topics:
βœ… Deep learning foundation
βœ… Computer vision
βœ… Deep generative modeling
βœ… Reinforcement learning
βœ… Robot learning
βœ… Text to image

Resources πŸ“š
Course website πŸ”—: http://introtodeeplearning.com/
Video lectures πŸ“½οΈ: https://www.youtube.com/playlist?list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI
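The "deep learning foundation" topic above boils down to gradient descent on a loss function; a minimal single-layer example in numpy (my own sketch, not course material):

```python
import numpy as np

# Fit a linear model by gradient descent on mean squared error --
# the basic training loop that deep learning builds on.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # d(MSE)/dw
    w -= lr * grad

print(w)  # recovers something close to true_w
```

Deep networks replace the linear model with stacked nonlinear layers and compute the gradient via backpropagation, but the update loop is the same.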

ramikrispin, to datascience
@ramikrispin@mstdn.social avatar

(1/2) Happy Tuesday! β˜€οΈ

Deep Generative Models - New Stanford Course πŸš€πŸ‘‡πŸΌ

Stanford University released a new course last week focusing on Deep Generative Models. The course, by Prof. Stefano Ermon, covers the models behind GenAI.

#genai #DataScience #MachineLearning #deeplearning
