In #homeassistant, using #nodered to make an API call to a #llamacpp server running #mistral 7B model. I create a prompt that asks it to summarize all the data in my house from the sensors. The results are pretty impressive for such a little model. Now I get a customized rundown, Jarvis style.
Useful? Probably not. But cool as hell. :cool_skelly:
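For anyone curious what a call like this looks like, here's a minimal Python sketch that posts a sensor-summary prompt to a llama.cpp server's `/completion` endpoint. The sensor names, URL, and prompt wording are all made up for illustration; the real data would come from Home Assistant via Node-RED.

```python
import json
from urllib import request

# Hypothetical sensor snapshot (in the real setup this comes from Home Assistant).
SENSORS = {
    "living_room_temp_c": 21.5,
    "front_door": "closed",
    "humidity_pct": 48,
}

def build_prompt(sensors: dict) -> str:
    """Turn a dict of sensor readings into a Jarvis-style summary prompt."""
    readings = "\n".join(f"- {name}: {value}" for name, value in sensors.items())
    return (
        "Summarize the current state of the house in two friendly sentences.\n\n"
        "Sensor readings:\n" + readings
    )

def summarize(base_url: str = "http://localhost:8080") -> str:
    """POST the prompt to the llama.cpp server's /completion endpoint."""
    payload = json.dumps({"prompt": build_prompt(SENSORS), "n_predict": 128}).encode()
    req = request.Request(
        f"{base_url}/completion",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

# With a llama.cpp server running locally:
# print(summarize())
```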
(1/3) Last Friday, I was planning to watch Masters of the Air ✈️, but my ADHD had different plans 🙃, and I ended up running a short POC and creating a tutorial for getting started with Ollama Python 🚀. Setup instructions are available for both Docker 🐳 and local environments.
TLDR: It is straightforward to run LLM models locally with the Ollama Python library. Models with up to ~7B parameters run smoothly with low compute resources.
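A minimal sketch of what that looks like with the Ollama Python client (the import is guarded so the snippet stays readable even without the package installed; the model name and prompt are just examples):

```python
# pip install ollama
try:
    import ollama  # official Ollama Python client
except ImportError:  # keep the sketch importable without the package
    ollama = None

def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format Ollama expects."""
    return [{"role": "user", "content": prompt}]

def ask(prompt: str, model: str = "mistral") -> str:
    """Send one chat turn to a local Ollama server and return the reply text."""
    response = ollama.chat(model=model, messages=build_messages(prompt))
    return response["message"]["content"]

# Requires `ollama serve` running and the model pulled (`ollama pull mistral`):
# print(ask("Explain mixture-of-experts in one sentence."))
```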
Mixtral-8x22b keeps asking for feedback on how it can improve, even though it has no memory. lol "I understand that our conversation will not be used directly to improve my model, but the feedback you provide can still help me understand your needs better and improve my responses in future interactions with you or other users. If there are any specific areas where you feel I could improve, please let me know so that I can address those concerns in our future conversations." #LLM #Mistral #AI
Hey fellow #AI / #LLM #nerds, I've discovered a weird issue with some LLMs I've been foolin' around with. I'm using #oobabooga, and some models seem to just quit when formatting block text.
For example, the above is what some models do. Others don't seem to care. Something about the ``` token makes it give up. Seen anything like this?
Does anyone have a good list of logical questions to judge large language models' ability to reason?
Questions like "if it takes 3 hours for 3 towels to dry, how long does it take for 9 towels to dry?"
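I don't know of a canonical list, but a quick way to compare models is a small hand-rolled set of trick questions with keyword-based scoring. These two questions and the scorer are just a sketch, not any standard benchmark:

```python
# Hand-picked "trick" reasoning questions with the keyword a correct
# answer should contain. The towel question tests parallel reasoning:
# drying is concurrent, so 9 towels still take 3 hours.
TRICK_QUESTIONS = [
    {
        "q": "If it takes 3 hours for 3 towels to dry, how long for 9 towels?",
        "keyword": "3 hours",
    },
    {
        "q": "I have 2 apples, eat 1, and buy 3 more. How many apples do I have?",
        "keyword": "4",
    },
]

def score(answers: list) -> float:
    """Fraction of model answers containing the expected keyword."""
    hits = sum(
        1 for item, ans in zip(TRICK_QUESTIONS, answers)
        if item["keyword"] in ans
    )
    return hits / len(TRICK_QUESTIONS)
```

Feed each `q` to the model under test and pass the replies to `score`; keyword matching is crude, but it's enough for quick side-by-side comparisons.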
I'm playing around with Mistral's leaked 70B Miqu LLM and want to test its reasoning skills for a project I'm working on. I've been really impressed so far. It's slower than Mistral & Mixtral, but it's been producing the best-reasoned answers I've seen from an LLM. And it's running locally!
600 km long #rollcloud (official cloud atlas name: #volutus) over the western Mediterranean. It is caused by a southwards moving pressure wave, which itself is formed by a shallow cold air outbreak, canalized and accelerated through the Rhone and Aude valleys (#Mistral/#Cers).
Running Mistral LLM locally with Ollama's 🦙 new Python 🐍 library inside a dockerized 🐳 environment with an allocation of 4 CPUs and 8 GB RAM. It took 19 sec to get a response 🚀. The last time I tried to run an LLM locally, it took 10 minutes to get a response 🤯
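For anyone wanting to reproduce that timing, here's a rough sketch that hits Ollama's REST API (`/api/generate` on the default port 11434) and measures wall-clock latency. The host and model name are assumptions about the setup:

```python
import json
import time
from urllib import request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default port

def build_payload(prompt: str, model: str = "mistral") -> bytes:
    """JSON body for a single non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def timed_generate(prompt: str, model: str = "mistral"):
    """Return (response_text, seconds_elapsed) for one generation."""
    req = request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with request.urlopen(req) as resp:
        text = json.loads(resp.read())["response"]
    return text, time.perf_counter() - start

# With the dockerized Ollama running:
# text, seconds = timed_generate("Why is the sky blue?")
# print(f"{seconds:.1f}s: {text[:80]}")
```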
#Mistral Instruct 7B v0.2 has a strange quirk I found tonight. I pasted in a C# class and asked it to generate #XMLDoc comments for everything. I specifically asked it to not rewrite the content of the methods and such, and it always rewrote the whole thing. It was accurate and the comments were great, but it always wants to redo the whole thing. The line between what it can and can't do is mysterious!
Saw #Mistral was trending on Twitter™ and got excited thinking maybe something new dropped. It was just tons of people misspelling "mistrial" in relation to the latest #Trump junk. :trump_sadge:
My #HomeAssistant setup reminds me to take my #medicine if I haven't taken it on time. Spent some time rigging it up to connect to a #Mistral #AI model. With a clever prompt, the reminder I get every night is now customized, encouraging, and inspiring. 😎
Looking into other AI-customized elements to work into my setup.
Any European competition to OpenAI is welcome, and it's good to see Mistral coming out with a new model for programming, Codestral. https://mistral.ai/news/codestral/
#AI #EU #GenerativeAI #Mistral #Microsoft #BigTech #Monopolies #SiliconValley: "Max von Thun, Europe director at the Open Markets Institute, told Jacobin that the new partnership between Microsoft and Mistral AI is symptomatic of the "huge structural concentration that you see in the tech sector, which is not new, which has been around for a long time, but which has basically put the big tech companies in a position to essentially co-opt or neutralize any potential players in AI who might challenge them directly."
Mistral has built its identity around its open-source model that can be modified and adapted by clients. What it stands to gain from its partnership with Microsoft is access to the latter’s enormous computing power and key position in market infrastructure.
“Here’s the catch: I can build an open-source model, but the challenge is to get it to the market and to the customer. As a company, that is what I care about,” Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, told Jacobin. “Distribution is a problem because they’re still a business. They need to make money. Microsoft gives them a pathway to that, by integrating it and offering it on their Azure marketplace." https://jacobin.com/2024/03/mistral-france-eu-monopoly-ai-regulation
[Translated from Portuguese] Mistral is an infrared MANPADS manufactured by the European multinational company MBDA Missile Systems (formerly Matra BAe Dynamics). Based on the French SATCP (Sol-Air à Très Courte Portée). (caminhoesdomundotododetodososmodelos.blogspot.com)
Trucks, tractors, ships, and everything about transport