“The Raspberry Pi AI Kit bundles an M.2-format Hailo 8L AI accelerator with the Raspberry Pi M.2 HAT+ to provide an accessible, cost-effective, and power-efficient way to integrate high-performance AI with the Raspberry Pi 5.”
£65.70 incl. VAT (coming soon)
Could it accelerate systems like #Ollama? I am interested in having a Pi 5 running as a “sidecar” to offer AI services to #MoodleBox (#Moodle on the Pi).
In the example above, we start by building an array of things we want to embed, embed them with nomic-embed-text and store them in Chroma DB, and then use llama3:8b as the main model.
Two big differences you will notice between the other two examples and this one are that the date no longer contains the year, and that I added a statement of what today’s date is, so that you can ask for “Today’s flavors”.
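A minimal sketch of that flow might look like the following. It assumes a Chroma server running on its default port; the sample documents, collection name, and prompt wording are illustrative placeholders of my own:

```js
import { Ollama } from 'ollama';
import { ChromaClient } from 'chromadb';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const chroma = new ChromaClient();

// The things we want to embed. Note the dates have no year.
const docs = [
  'On June 1, the flavors of the day are Mint Chip and Butter Pecan.',
  'On June 2, the flavors of the day are Rocky Road and Lemon Ice.'
];

// Embed each document with nomic-embed-text and store it in Chroma DB.
const collection = await chroma.getOrCreateCollection({ name: 'flavors' });
for (const [i, doc] of docs.entries()) {
  const { embedding } = await ollama.embeddings({ model: 'nomic-embed-text', prompt: doc });
  await collection.add({ ids: [`doc-${i}`], embeddings: [embedding], documents: [doc] });
}

// Embed the question and pull back the closest documents.
const question = "What are today's flavors?";
const { embedding } = await ollama.embeddings({ model: 'nomic-embed-text', prompt: question });
const results = await collection.query({ queryEmbeddings: [embedding], nResults: 2 });

// Hand the context to llama3:8b, stating today's date so the model
// can resolve "today" against the year-less dates in the documents.
const today = new Date().toDateString();
const response = await ollama.chat({
  model: 'llama3:8b',
  messages: [{
    role: 'user',
    content: `Today's date is ${today}.\n\nContext:\n${results.documents[0].join('\n')}\n\n${question}`
  }]
});
console.log(response.message.content);
```

Injecting today’s date into the prompt is what lets the model answer a relative question like “Today’s flavors” against documents whose dates carry no year.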
Interesting local / #private #AI #search in-progress project worth watching: Perplexica. Aims to be similar to #Perplexity but has a ways to go yet. Works with #Ollama, which is what I’m using on #Linux to test local AI.
LLaVA (Large Language-and-Vision Assistant) was updated to version 1.6 in February. I figured it was time to look at how to use it to describe an image in Node.js. LLaVA 1.6 is an advanced vision-language model created for multi-modal tasks, seamlessly integrating visual and textual data. Last month, we looked at how to use the official Ollama JavaScript Library. We are going to use the same library today.
Basic CLI Example
Let’s start with a CLI app. For this example, I am using my remote Ollama server, but if you don’t have one of those, you will want to install Ollama locally and replace const ollama = new Ollama({ host: 'http://100.74.30.25:11434' }); with const ollama = new Ollama({ host: 'http://localhost:11434' });.
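Here is a minimal sketch of what app.js can look like; the model tag (llava:latest) and the prompt text are assumptions of my own:

```js
import { readFileSync } from 'node:fs';
import { Ollama } from 'ollama';

// Swap in http://localhost:11434 if you are running Ollama locally.
const ollama = new Ollama({ host: 'http://100.74.30.25:11434' });

// Grab the image filename from the command line.
const imagePath = process.argv[2];
if (!imagePath) {
  console.error('Usage: node app.js <image filename>');
  process.exit(1);
}

// LLaVA accepts images as base64-encoded strings.
const image = readFileSync(imagePath).toString('base64');

const response = await ollama.chat({
  model: 'llava:latest',
  messages: [{
    role: 'user',
    content: 'Describe this image.',
    images: [image]
  }]
});

console.log(response.message.content);
```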
To run it, first run npm i ollama and make sure that you have "type": "module" in your package.json. You can then run it from the terminal with node app.js <image filename>. Let’s take a look at the result.
Its ability to describe an image is pretty awesome.
Basic Web Service
So, what if we wanted to run it as a web service? Running Ollama locally is cool and all, but it’s cooler if we can integrate it into an app. If you run npm install express to install Express, you can run this as a web service.
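Here’s a sketch of what that service could look like; the route and port follow the description below, while the prompt text, model tag, and body-size limit are assumptions:

```js
import express from 'express';
import { Ollama } from 'ollama';

const ollama = new Ollama({ host: 'http://localhost:11434' });
const app = express();

// Accept the raw binary body of the request as the image.
app.use(express.raw({ type: '*/*', limit: '20mb' }));

app.post('/describe-image', async (req, res) => {
  try {
    // req.body is a Buffer; LLaVA accepts base64-encoded images.
    const image = req.body.toString('base64');
    const response = await ollama.chat({
      model: 'llava:latest',
      messages: [{
        role: 'user',
        content: 'Describe this image.',
        images: [image]
      }]
    });
    res.json({ description: response.message.content });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(4040, () => console.log('Listening on http://localhost:4040'));
```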
The web service accepts POST requests to http://localhost:4040/describe-image with a binary body containing the image you want described. It then returns a JSON object containing the description.
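As a quick usage example, you could call it from Node 18+ with the built-in fetch (photo.jpg is a placeholder filename):

```js
import { readFileSync } from 'node:fs';

// Send a local image as the raw request body and print the JSON response.
const res = await fetch('http://localhost:4040/describe-image', {
  method: 'POST',
  headers: { 'Content-Type': 'application/octet-stream' },
  body: readFileSync('photo.jpg')
});
console.log(await res.json());
```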