I made good progress on the #AI collaboration project with @Jorvon_Moss over the weekend. The #Nvidia Orin Nano boots up the servers and WiFi hotspot automatically. You just need to run Hopper Chat on the #RaspberryPi. No internet required! #LLM #ChatGPT
#LLM entertains and teaches!
Still, this whole so-called artificial intelligence isn't as bad as it's made out to be. Today, with its help, I learned how to use the #Mastodon API to display specific posts in the browser 😁
I'm one step away from building my own client!
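The post doesn't say which endpoint was used; a common starting point for "display specific posts" is the public statuses endpoint of the Mastodon REST API, `GET /api/v1/accounts/:id/statuses`. A minimal stdlib-only sketch (the instance name and account ID below are placeholders, not from the post):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def statuses_url(instance, account_id, limit=5, exclude_replies=True):
    """Build the Mastodon API URL for an account's recent statuses."""
    params = urlencode({"limit": limit,
                        "exclude_replies": str(exclude_replies).lower()})
    return f"https://{instance}/api/v1/accounts/{account_id}/statuses?{params}"

def render(statuses):
    """Reduce the JSON payload to (timestamp, HTML content) pairs,
    ready to drop into a page."""
    return [(s["created_at"], s["content"]) for s in statuses]

# Usage (requires network; IDs are hypothetical):
#   with urlopen(statuses_url("mastodon.social", "12345")) as resp:
#       for created, html in render(json.load(resp)):
#           print(created, html)
```

Each status's `content` field is already HTML, so rendering it in the browser is mostly a matter of inserting it into the page.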
(2/2) The book covers the following topics:
✅ Mathematical foundations of machine learning and NLP
✅ Data preprocessing techniques for text data
✅ Machine learning applications for NLP and text classification
✅ Deep learning methods for NLP and text applications
✅ Theory and design of Large Language Models
✅ Applications of LLM models
✅ LLM applications with Langchain
The book is for folks who are interested in getting started with NLP and those who wish to delve into LLM applications.
This generative model allows you to sketch out a scene with a few words; it then leverages an LLM to flesh out the details, with the ultimate goal of feeding those details to a downstream visual image generation model.
It is almost, but not quite, entirely the inverse of image captioning models.
This offers the closest experience to an image generation tool that's usable by people with visual impairments.
1/3 I tested some of the latest popular LLM UIs for accessibility with screen readers, including oobabooga text-generation-webui, Open WebUI (aka Ollama WebUI), GPT4All, LM Studio, Koboldcpp, and Llama.cpp server on Windows. The most accessible was Llama.cpp server, though it had the fewest features. Oobabooga was also good, except for the list box not announcing choices as you browse; however, you can check your selection afterward. #accessibility #LLM #AI
GPT-4o, OpenAI's latest language model that has just been made freely available, has major safety flaws, an investigation by Radio-Canada's disinformation-busting unit, Décrypteurs, has uncovered.
One tip for using ChatGPT that I am going to start using is to save prompts as a starting point so that I can always set the expectations for the rest of the thread. One way I use it is with shell scripting. I can give it all of my code style preferences and general code standards with a saved prompt, which will save time when I start a new thread. #ChatGPT #LLM
“When I say ‘I am hungry’, I am reporting on my sensed physiological states. When an LLM generates the sequence ‘I am hungry’, it is simply generating the most probable completion of the sequence of words in its current prompt.”
Any European competition to OpenAI is welcome; good to see Mistral coming out with a new model for programming, Codestral. https://mistral.ai/news/codestral/
In the above example, we start by building an array of things that we want to embed, embed them using nomic-embed-text and Chroma DB, and then use llama3:8b for the main model.
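The pipeline described — embed a list of documents, store the vectors, retrieve the best match by similarity, then hand it to the chat model — can be sketched without the actual nomic-embed-text/Chroma/llama3:8b dependencies. Here a toy bag-of-words embedding and an in-memory store stand in for the real embedding model and vector database; the structure of the pipeline, not the quality of the vectors, is the point:

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model such as nomic-embed-text:
    a sparse bag-of-words vector keyed by lowercase token."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class ToyVectorStore:
    """Minimal in-memory analogue of a Chroma collection."""
    def __init__(self):
        self.docs = []          # list of (text, vector) pairs

    def add(self, texts):
        self.docs.extend((t, embed(t)) for t in texts)

    def query(self, text, n_results=1):
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [t for t, _ in ranked[:n_results]]
```

In the real setup, the top `query` results would be prepended as context to the prompt sent to the main model (llama3:8b in the post).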
Two big differences that you will notice between the other two examples and this one are that the date no longer contains the year, and that I added a statement of what today's date is, so that you can ask for "Today's flavors".
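Stating today's date in the prompt is what lets the model resolve a relative question like "Today's flavors" against year-less dates. A minimal sketch of that prompt-building step (the wording and data layout here are illustrative, not the post's exact text):

```python
from datetime import date

def build_system_prompt(flavors_by_day, today=None):
    """Prepend today's date so the model can resolve 'today'.
    flavors_by_day maps "MM-DD" strings (no year) to flavor lists."""
    today = today or date.today()
    lines = [f"Today's date is {today.isoformat()}."]
    for day, flavors in flavors_by_day.items():
        lines.append(f"On {day} the flavors are: {', '.join(flavors)}.")
    return "\n".join(lines)
```

Because the dates carry no year, the same flavor calendar can be reused annually; the injected "Today's date is …" line is the only part that changes per request.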
In which, as a follow-up to my previous article, I demonstrate the installation and use of Ollama.
We install Ollama, pull mistral:instruct, and use the Ollama prompt on a Mac mini or a Windows machine with an Nvidia GPU to have a text summarized.
Yeah - I appreciate Sal's enthusiasm for AI, but the current generation of bots confidently saying nonsense to kids seems ... problematic.
Quelle surprise. But what's worrying is how apparently 'tech 'n meeja' savvy young people are so easily taken in by propaganda - err, I mean 'hype and marketing'.
"...Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests...
"...But the study... says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech..."