codewiz, to rust
@codewiz@mstdn.io avatar

Got #DeepSeek Coder 33B running on my desktop's #AMDGPU card with #ollama.

First off, I tested its ability to generate and understand #Rust code. Unfortunately, it falls into the same confusion as the smaller 6.7B model.

https://gist.github.com/codewiz/c6bd627ec38c9bc0f615f4a32da0490e
#ollama #llm #deepseek

codewiz,
@codewiz@mstdn.io avatar

Today I tried running Codestral, a 22B parameter LLM tuned for coding by Mistral AI.

With my Rust mock interview questions, it performed better than all other offline models I tried so far.

https://paste.benpro.fr/?4eb8f2e15841672d#DGnLh3dCp7UdzvWoJgev58EPmre19ij31KSbbq8c85Gm

#coding #rust #llm #ollama #gpt #programming

elevenhsoft, to System76 Polish
@elevenhsoft@mastodon.social avatar

Some progress on the Ollama applet for #COSMIC.

Now we can save and load full conversations. Also, from now on you can keep your message context.

elevenhsoft, to System76 Polish
@elevenhsoft@mastodon.social avatar

Some updates on the Ollama applet for #COSMIC :)

Improved the layout of the settings page, so now I think nothing looks out of place haha.

Added loading, saving, and removing conversations from history.

It's also now possible to pull and remove models locally.

Ahh, and a beautiful button to stop the bot while it's typing, so if we don't like what it's saying, we can cut it off mid-reply :)


joe, to ai

LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library today.

Basic CLI Example

Let’s start with a CLI app. For this example, I am using my remote Ollama server but if you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
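
Here is a minimal sketch of what that CLI script might look like, using the chat API from the official ollama package. The llava model tag, the prompt wording, and the file handling are my assumptions rather than a copy of the original app.js.

```javascript
// app.js — a minimal sketch, assuming the "llava" model tag and a simple prompt.
import { readFile } from 'node:fs/promises';
import { Ollama } from 'ollama';

// Point this at your own Ollama server, or keep localhost for a local install.
const ollama = new Ollama({ host: 'http://localhost:11434' });

const imagePath = process.argv[2];
if (!imagePath) {
  console.error('Usage: node app.js <image filename>');
  process.exit(1);
}

// The library accepts raw image bytes (or base64 strings) in the images array.
const image = await readFile(imagePath);

const response = await ollama.chat({
  model: 'llava',
  messages: [
    {
      role: 'user',
      content: 'Describe this image in detail.',
      images: [image],
    },
  ],
});

console.log(response.message.content);
```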

To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can run it from the terminal by running node app.js <image filename>. Let’s take a look at the result.

Its ability to describe an image is pretty awesome.

Basic Web Service

So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. If you run npm install express to add Express, you can run this as a web service.
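
Here is a rough sketch of such a service, reusing the same ollama chat call as above; the raw-body size limit and the error handling are my own choices rather than anything from the original post.

```javascript
// server.js — a minimal sketch of the web service described in this post.
import express from 'express';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const app = express();

// Accept the image as a raw binary request body (the size limit is arbitrary).
app.use(express.raw({ type: '*/*', limit: '25mb' }));

app.post('/describe-image', async (req, res) => {
  try {
    const response = await ollama.chat({
      model: 'llava',
      messages: [
        { role: 'user', content: 'Describe this image in detail.', images: [req.body] },
      ],
    });
    res.json({ description: response.message.content });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(4040, () => console.log('Listening on http://localhost:4040'));
```

You can poke at it with something like curl -X POST --data-binary @photo.jpg http://localhost:4040/describe-image.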

The web service accepts POST requests to http://localhost:4040/describe-image with a binary body containing the image you want described, and it returns a JSON object containing the description.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-18-at-1.41.20%E2%80%AFPM.png?resize=1024%2C729&ssl=1

Have any questions, comments, etc.? Feel free to drop a comment below.

https://jws.news/2024/how-can-you-use-llava-and-node-js-to-describe-an-image/

elevenhsoft, to System76 Polish
@elevenhsoft@mastodon.social avatar

Hello friends! New #COSMICdesktop applet is coming soon....

This time I'm working on Ollama applet for our lovely #COSMIC :)

#popos #system76 #linux #ollama

kevinctofel, to ai
@kevinctofel@hachyderm.io avatar

Interesting local / #private #AI #search in-progress project worth watching: Perplexica. Aims to be similar to #Perplexity but has a ways to go yet. Works with #Ollama, which is what I’m using on #Linux to test local AI.

https://youtu.be/TkxmOC4HBSg?si=L9uCF9ePlT7Ccs6t

elevenhsoft, to random Polish
@elevenhsoft@mastodon.social avatar

We can now attach images to the Ollama applet for #COSMIC :)

It's handling them just fine ^^

joe, to ai

Previously, we looked at how to build a retrieval-augmented generation system using LangChain. As of last month, you can do the same thing with just the Ollama Python Library that we used in last month’s How to Write a Python App that uses Ollama. In today’s post, I want to use the Ollama Python Library, Chroma DB, and the JSON API for Kopp’s Frozen Custard to embed the flavor of the day for today and tomorrow. Let’s start with a very basic embedding example.
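
Here is a minimal sketch of that basic embedding flow, with made-up sample documents standing in for the real flavor data; the prompt wording is also my assumption.

```python
# embed.py — a minimal sketch of the embed-store-query flow, not the post's exact code.
import chromadb
import ollama

# Things we want to embed (placeholder strings for illustration).
documents = [
    "On Friday the flavor of the day is Mint Chip.",
    "On Saturday the flavor of the day is Turtle Sundae.",
]

client = chromadb.Client()
collection = client.create_collection(name="flavors")

# Embed each document with nomic-embed-text and store it in Chroma DB.
for i, doc in enumerate(documents):
    embedding = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], embeddings=[embedding], documents=[doc])

# Embed the question, retrieve the closest document, and hand it to llama3:8b.
question = "What is the flavor of the day on Saturday?"
query = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
results = collection.query(query_embeddings=[query], n_results=1)
context = results["documents"][0][0]

answer = ollama.generate(
    model="llama3:8b",
    prompt=f"Using this context: {context}\n\nAnswer this question: {question}",
)
print(answer["response"])
```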

In the above example, we start by building an array of things that we want to embed, embed them with nomic-embed-text and store them in Chroma DB, and then use llama3:8b as the main model.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.32.52%E2%80%AFPM.png?resize=1024%2C800&ssl=1

So, how do you get the live data for the flavors of the day? The API, of course!
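
Sketching that fetch step roughly, below; the endpoint URL and the JSON field names are placeholders, since the post does not spell out Kopp’s actual API shape.

```python
# flavors.py — a rough sketch of fetching the flavor-of-the-day data.
import requests

API_URL = "https://example.com/kopps/flavor-of-the-day"  # hypothetical endpoint

response = requests.get(API_URL, timeout=10)
response.raise_for_status()
data = response.json()

# Build an array of embeddable strings, one per day's flavor.
documents = [
    f"On {entry['date']} the flavor of the day is {entry['flavor']}."
    for entry in data
]
print(documents)
```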

This simple script gets the flavor of the day from a JSON API, builds an array of embeddable strings, and prints the result.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.44.23%E2%80%AFPM.png?resize=1024%2C800&ssl=1

The next step is to combine the two scripts.
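
Sketching just the part that changes when the two scripts are combined (the rest is the same fetch, embed, and query flow as above); the field names and the wording of the date statement are my assumptions.

```python
# The two tweaks in the combined script: drop the year from each date and add
# a document stating today's date, so "today's flavors" can be resolved.
from datetime import date

today = date.today()
year_suffix = f", {today.year}"
documents = [
    f"On {entry['date'].removesuffix(year_suffix)} the flavor of the day is {entry['flavor']}."
    for entry in data
]
documents.append(f"Today's date is {today.strftime('%B %d')}.")
```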

Two big differences that you will notice between the other two examples and this one are that the date no longer contains the year and that I added a statement of what today’s date is, so that you can ask for “Today’s flavors”.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.56.59%E2%80%AFPM.png?resize=1024%2C800&ssl=1

If you have any questions on how this works, later on today I am hosting a live webinar on Crafting Intelligent Python Apps with Retrieval-Augmented Generation. Feel free to stop by and see how to build a RAG system.

https://jws.news/2024/how-to-get-ai-to-tell-you-the-flavor-of-the-day-at-kopps/

#AI #ChromaDB #llama3 #LLM #Ollama #Python #RAG

marcusgreen, to RaspberryPi

AI On the #RaspberryPI

https://thepihut.com/products/raspberry-pi-ai-kit

“The Raspberry Pi AI Kit bundles an M.2-format Hailo 8L AI accelerator with the Raspberry Pi M.2 HAT+ to provide an accessible, cost-effective, and power-efficient way to integrate high-performance AI with the Raspberry Pi 5."

£65.7 incl. VAT (coming soon)

Could it accelerate systems like #Ollama? I am interested in having a Pi 5 running as a “Sidecar” to offer AI services to #MoodleBox (#Moodle on the Pi).

itsfoss, to ai
@itsfoss@mastodon.social avatar

Want to use a web UI for Ollama? Here are your best options:

https://itsfoss.com/ollama-web-ui-tools/
