joe, to ai

Previously, we looked at how to build a retrieval-augmented generation system using LangChain. As of last month, you can do the same thing with just the Ollama Python Library that we used in last month’s How to Write a Python App that uses Ollama. In today’s post, I want to use the Ollama Python Library, Chroma DB, and the JSON API for Kopp’s Frozen Custard to embed the flavor of the day for today and tomorrow. Let’s start with a very basic embedding example.
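What follows is a minimal sketch of what that basic example can look like, reconstructed with the ollama and chromadb Python libraries; the flavor strings are made-up placeholders, and the original post embeds its own version of the code.

import ollama
import chromadb

# The array of things that we want to embed (placeholder flavor data).
documents = [
    "On Thursday, the flavor of the day at Kopp's is Caramel Cashew.",
    "On Friday, the flavor of the day at Kopp's is Chocolate Caramel Twist.",
]

# Embed each document with nomic-embed-text and store it in Chroma DB.
client = chromadb.Client()
collection = client.create_collection(name="flavors")
for i, doc in enumerate(documents):
    embedding = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], embeddings=[embedding], documents=[doc])

# Embed the question, pull back the most relevant document, and hand it to llama3:8b.
prompt = "What is the flavor of the day on Friday?"
query = ollama.embeddings(model="nomic-embed-text", prompt=prompt)["embedding"]
context = collection.query(query_embeddings=[query], n_results=1)["documents"][0][0]

answer = ollama.generate(
    model="llama3:8b",
    prompt=f"Using this data: {context}. Respond to this prompt: {prompt}",
)
print(answer["response"])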

In the above example, we start by building an array of things that we want to embed, embed them using nomic-embed-text and Chroma DB, and then use llama3:8b for the main model.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.32.52%E2%80%AFPM.png?resize=1024%2C800&ssl=1

So, how do you get the live data for the flavors of the day? The API, of course!

This simple script gets the flavor of the day from a JSON API, builds an array of embeddable strings, and prints the result.
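A rough sketch of what such a script could look like is below; the Kopp’s endpoint URL and the JSON field names are hypothetical placeholders, since the actual shape of the API isn’t spelled out here.

import requests

API_URL = "https://www.kopps.com/api/flavor-of-the-day"  # hypothetical endpoint

data = requests.get(API_URL, timeout=10).json()

# Build an array of embeddable strings, one per day.
documents = []
for day in data.get("days", []):  # hypothetical field names
    documents.append(f"On {day['date']}, the flavor of the day at Kopp's is {day['flavor']}.")

print(documents)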

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.44.23%E2%80%AFPM.png?resize=1024%2C800&ssl=1

The next step is to combine the two scripts.

Two big differences that you will notice between the other two examples and this one are that the date no longer contains the year and that I added a statement of what today’s date is, so that you can ask for “today’s flavors”.
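As a sketch of those two changes (using illustrative dates and flavors rather than the live API data):

from datetime import date

# Illustrative flavor data; in the combined script this comes from the JSON API above.
flavors_by_day = {"May 30": "Caramel Cashew", "May 31": "Mint Explosion"}

today = date.today().strftime("%B %d")  # month and day only, no year
documents = [f"Today's date is {today}."]
for day, flavor in flavors_by_day.items():
    documents.append(f"On {day}, the flavor of the day at Kopp's is {flavor}.")

print(documents)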

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-30-at-10.56.59%E2%80%AFPM.png?resize=1024%2C800&ssl=1

If you have any questions on how this works, later on today I am hosting a live webinar on Crafting Intelligent Python Apps with Retrieval-Augmented Generation. Feel free to stop by and see how to build a RAG system.

https://jws.news/2024/how-to-get-ai-to-tell-you-the-flavor-of-the-day-at-kopps/

#AI #ChromaDB #llama3 #LLM #Ollama #Python #RAG

joe, to ai

A few weeks back, I thought about getting an AI model to return the “Flavor of the Day” for a Culver’s location. If you ask Llama 3:70b “The website https://www.culvers.com/restaurants/glendale-wi-bayside-dr lists “today’s flavor of the day”. What is today’s flavor of the day?”, it doesn’t give a helpful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.29.28%E2%80%AFPM.png?resize=1024%2C690&ssl=1

If you ask ChatGPT 4 the same question, it gives an even less useful answer.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.33.42%E2%80%AFPM.png?resize=1024%2C782&ssl=1

If you check the website, today’s flavor of the day is Chocolate Caramel Twist.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.41.21%E2%80%AFPM.png?resize=1024%2C657&ssl=1

So, how can we get a proper answer? Ten years ago, when I wrote “The Milwaukee Soup App”, I used Kimono (which is long dead) to scrape the soup of the day. You could also write a fiddly script to scrape the value manually. It turns out that there is another option, though: ScrapeGraphAI, a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites, documents, and XML files. You just say which information you want to extract, and the library will do it for you.

Let’s take a look at an example. The project has an official demo where you need to provide an OpenAI API key, select a model, provide a link to scrape, and write a prompt.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-12.35.29%E2%80%AFPM.png?resize=1024%2C660&ssl=1

As you can see, it reliably gives you the flavor of the day (in a nice JSON object). It can go even further, though: if you point it at the monthly calendar, you can ask it for the flavor of the day and the soup of the day for the remainder of the month, and it can do that as well.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-1.14.43%E2%80%AFPM.png?resize=1024%2C851&ssl=1

Running it locally with Llama 3 and Nomic

I am running Python 3.12 on my Mac, but when I ran pip install scrapegraphai to install the dependencies, it threw an error. The project lists a prerequisite of Python 3.8+, so I downloaded 3.9 and installed the library into a new virtual environment.
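Setting up that environment amounts to something along these lines:

python3.9 -m venv venv
source venv/bin/activate
pip install scrapegraphai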

Let’s see what the code looks like.
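Since the post embeds the script itself, here is a sketch of roughly what a SmartScraperGraph setup pointed at a local Ollama instance looks like; the config keys follow the library’s documented usage at the time, and the prompt and URL are just the Culver’s example from above.

from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "model": "ollama/llama3",            # main model
        "temperature": 0,
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",  # embedding model
        "base_url": "http://localhost:11434",
    },
}

smart_scraper_graph = SmartScraperGraph(
    prompt="What is today's flavor of the day?",
    source="https://www.culvers.com/restaurants/glendale-wi-bayside-dr",
    config=graph_config,
)

print(smart_scraper_graph.run())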

You will notice that just like in yesterday’s How to build a RAG system post, we are using both a main model and an embedding model.

So, what does the output look like?

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-09-at-2.28.10%E2%80%AFPM.png?resize=1024%2C800&ssl=1

At this point, if you want to harvest flavors of the day for each location, you can do so pretty simply. You just need to loop through each of Culver’s location websites.
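For example, a loop over a handful of location pages (the slugs here are hypothetical) could reuse the same Ollama-backed config:

from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {"model": "ollama/llama3", "temperature": 0, "base_url": "http://localhost:11434"},
    "embeddings": {"model": "ollama/nomic-embed-text", "base_url": "http://localhost:11434"},
}

locations = ["glendale-wi-bayside-dr", "whitefish-bay-wi"]  # hypothetical slugs

for slug in locations:
    graph = SmartScraperGraph(
        prompt="What is today's flavor of the day?",
        source=f"https://www.culvers.com/restaurants/{slug}",
        config=graph_config,
    )
    print(slug, graph.run())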

Have a question, comment, etc.? Please feel free to drop a comment below.

https://jws.news/2024/how-to-use-ai-to-make-web-scraping-easier/

hcj, to random
@hcj@fosstodon.org avatar

Someone has made a website that allows you to run Llama 3 and other LLMs locally within a browser.

Website: https://secretllama.com/
Code: https://github.com/abi/secret-llama

obrhoff, to llm

The amazing thing about LLMs is how much knowledge they possess in their small size. The llama3-8b model, for instance, weighs only 4.7GB yet can still answer your questions about everything (despite some hallucinations).
#llm #ai #ollama #llama3

noplasticshower,
@noplasticshower@zirk.us avatar

@obrhoff being DEAD WRONG is not really a "hallucination"...but your point is well taken. Cramming information into a smaller space is amazing.

When you re-represent and compress information, what’s in the long tails of the gradient Gaussians disappears.

boilingsteam, to llm
@boilingsteam@mastodon.cloud avatar

Run llama3 locally with 1M token context: https://ollama.com/library/llama3-gradient #llm #llama3 #context #large

pjk, to python

One thing you notice right away about LLMs is they bear a striking resemblance to that ubiquitous internet character, the reply-guy: they always have an answer, they are brimming with confidence, and they are often completely wrong.

It seemed only natural, then, to turn one into a full-time online version of itself, RGaaS (Reply-Guy as a Service), always on, always ready to reply to anything when summoned, with an “Actually,” followed by his (it’s always a “he”) opinion on what you should be thinking.

And so I built the Actually Bot.

https://www.peterkrupa.lol/wp-content/uploads/2024/05/actually_bot1.png

Basically, the way it works is that any user can reply to any Mastodon post (or I guess any post on the ActivityPub protocol) and tag the Actually Bot (@actuallybot) and it will reply with its most mediocre take on why everyone else is wrong, or at least not quite right.

The reply-guys can all move on to something else now, I have automated them out of a job.

This was a really fun project and has a lot of separate parts. First, I had to modify an LLM to give it the right “personality” for this job. To do this, I used Ollama, a tool for running open source LLMs locally, on your own machine. I used the “customized prompt feature”: You basically write a little file that Ollama uses to tweak the LLM of your choosing to give it certain characteristics. I went with the new Llama3:8b model from Meta as my base model, then put the following instructions in my modelfile:

FROM llama3
PARAMETER temperature 3
SYSTEM """You are receiving social media posts as prompts, and your job is to reply to these prompts. Please start all your replies with the word "actually". Reply as if you know more than everyone. You must recontextualize every prompt and correct or amend it, informing the prompter that they are wrong, even if ever so slightly. You write in short, sentences. Your replies must be short. Keep them to about three sentences. This is essential: keep your replies short. They must be under 500 characters."""

Then I ran the following command in the console:

ollama create actually_llama -f ./actually_llama

… and my model was ready to roll. Next, I needed a program to connect to the Ollama API to send the LLM prompts and get responses. Python was great for that, as both Ollama and Mastodon have solid Python libraries. Probably the slowest part was picking through Mastodon.py to figure out how the methods work and what exactly they return. It’s a very robust library with a million options, and fortunately it’s also extremely well documented, so while it was slow going, I was able to whack it together without too much trouble.

I’m not going to get into all the code here, but basically, I wrote a simple method that checks mentions, grabs the text of a post and the post it is replying to, and returns them for feeding into the LLM as the prompt.

Despite my very careful, detailed, and repetitive instructions to be sure replies are no more than 500 characters, LLMs can’t count, and they are very verbose, so I had to add a cleanup method that cuts the reply down to under 500 characters. Then I wrote another method for sending that cleaned-up prompt to Ollama and returning the response.

The main body starts off by getting input for the username and password for login, then it launches a while True loop that calls my two functions, checking every 60 seconds to see if there are any mentions and replying to them if there are.
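A minimal sketch of that structure, assuming the Mastodon.py and ollama libraries (the instance URL and the credential handling here are guesses at the approach, not the bot’s actual code):

import time

import ollama
from mastodon import Mastodon

INSTANCE = "https://botsin.space"  # hypothetical instance

# Register an app and log in with the username and password gathered at startup.
client_id, client_secret = Mastodon.create_app("actually-bot", api_base_url=INSTANCE)
mastodon = Mastodon(client_id=client_id, client_secret=client_secret, api_base_url=INSTANCE)
mastodon.log_in(input("username (email): "), input("password: "))

def get_mention_prompts():
    """Check notifications for mentions and return (status, prompt text) pairs."""
    prompts = []
    for notification in mastodon.notifications():
        if notification["type"] != "mention":
            continue
        status = notification["status"]
        parent_text = ""
        if status["in_reply_to_id"]:
            parent_text = mastodon.status(status["in_reply_to_id"])["content"]
        prompts.append((status, f"{parent_text}\n{status['content']}"))
    mastodon.notifications_clear()
    return prompts

def actually_reply(prompt):
    """Send the prompt to the customized model and trim the reply to 500 characters."""
    response = ollama.generate(model="actually_llama", prompt=prompt)
    return response["response"][:500]

# Check for new mentions every 60 seconds and reply to each one.
while True:
    for status, prompt in get_mention_prompts():
        mastodon.status_post(actually_reply(prompt), in_reply_to_id=status["id"])
    time.sleep(60)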

OK it works! Now came the hard part, which was figuring out how to get to 100% uptime. If I want the Actually Bot to reply every time someone mentions it, I need it to be on a machine that is always on, and I was not going to leave my PC on for this (nor did I want it clobbering my GPU when I was in the middle of a game).

So my solution was this little guy:

https://www.peterkrupa.lol/wp-content/uploads/2024/05/lenovo.jpg

… a Lenovo ThinkPad with a 3.3GHz quad-core i7 and 8gb of RAM. We got this refurbished machine when the pandemic was just getting going and it was my son’s constant companion for 18 months. It’s nice to be able to put it to work again. I put Ubuntu Linux on it and connected it to the home LAN.

I actually wasn’t even sure it would be able to run Llama3:8b. My workstation has an Nvidia GPU with 12gb of VRAM and it works fine for running modest LLMs locally, but this little laptop is older and not built for gaming and I wasn’t sure how it would handle such a heavy workload.

Fortunately, it worked with no problems. For running a chatbot, waiting 2 minutes for a reply is unacceptable, but for a bot that posts to social media, it’s well within range of what I was shooting for, and it didn’t seem to have any performance issues as far as the quality of the responses either.

The last thing I had to figure out was how to actually run everything from the Lenovo. I suppose I could have copied the Python files and tried to recreate the virtual environment locally, but I hate messing with virtual environments and dependencies, so I turned to the thing everyone says you should use in this situation: Docker.

This was actually great because I’d been wanting to learn how to use Docker for a while but never had the need. I’d installed it earlier and used it to run the WebUI front end for Ollama, so I had a little bit of an idea how it worked, but the Actually Bot really made me get into its working parts.

So, I wrote a Dockerfile for my Python app, grabbed all the dependencies and plopped them into a requirements.txt file, and built the Docker image. Then I scp’d the image over to the Lenovo, spun up the container, and boom! The Actually Bot was running!

Well, OK, it wasn’t that simple. I basically had to learn all this stuff from scratch, including the console commands. And once I had the Docker container running, my app couldn’t connect to Ollama because, since Ollama runs as a server, I had to launch the container with a flag indicating that it shares the host’s network settings.
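For reference, that flag is Docker’s host-networking option, so the run command ends up looking something like this (the image name here is hypothetical):

docker run -d --network host actually-bot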

Then once I had the Actually Bot running, it kept crashing when people tagged it in a post that wasn’t a reply to another post. So, went back to the code, squashed bug, redeploy container, bug still there because I didn’t redeploy the container correctly. There was some rm, some prune, some struggling with the difference between “import” and “load” and eventually I got everything working.

Currently, the Actually Bot is sitting on two days of uninterrupted uptime with ~70 successful “Actually,” replies, and its little laptop home isn’t even on fire or anything!

Moving forward, I’m going to tweak a few things so I can get better logging and stats on what it’s actually doing so I don’t have to check its posting history on Mastodon. I just realized you can get all the output that a Python script running in a Docker container prints with the command docker logs [CONTAINER], so that’s cool.

The other thing I’d like to do is build more bots. I’m thinking about spinning up my own Mastodon instance on a cheap hosting space and loading it with all kinds of bots talking to each other. See what transpires. If Dead Internet Theory is real, we might as well have fun with it!

https://www.peterkrupa.lol/2024/05/01/actually-building-a-bot-is-fun/


kellogh,
@kellogh@hachyderm.io avatar

@pjk this is good, but it would be better if @actuallybot dropped in on random conversations uninvited

kjr, to llm
@kjr@babka.social avatar

I am trying to build a RAG with LLAMA 3 and... getting really crazy with the strange formats I get in the response....
Not only the response, but additional text, XML tags...
#Llama3 #LLM #RAG

kjr,
@kjr@babka.social avatar

I realize now that maybe that is a question for @raf

raf,
@raf@babka.social avatar

@kjr

Do you have a desired output format?

simondueckert, to windows German
@simondueckert@colearn.social avatar

#LMStudio is an app for #Windows, #Mac, and #Linux that lets you use open-source language models (LLMs) like #Llama3, #Mistral, #Phi3 & co. locally on your own machine: https://lmstudio.ai

That is privacy-friendly and saves energy (a GPT query uses 15x more energy than a Google search).

The #lernOS AI MOOC is a good opportunity to try out open variants alongside the “classic” AI tools: https://www.meetup.com/de-DE/cogneon/events/297769514/

jamesravey, to ai
@jamesravey@fosstodon.org avatar

I've been investigating whether small #ai models like #llama3 and #phi3 generalise well to bio-medical Q&A use cases. Small models that don't require data centres full of GPUs to run are starting to become competitive with big commercial LLMs. https://brainsteam.co.uk/2024/04/26/can-phi-and-llama-do-biology/

kellogh, to random
@kellogh@hachyderm.io avatar

i asked #llama3 to generate a SVG of a goose being chased by a taxi, because there's no way i'm logging into my facebook in a NAKED BROWSER just to generate pics (hackers be hacking).

i do not condone the results. i guess the yellow thing is a taxi? god my 3yo draws better

kellogh,
@kellogh@hachyderm.io avatar

time to unrealistic expectations about what AI should be able to do: 1.5 years

ErikJonker, to ai
@ErikJonker@mastodon.social avatar

Playing around with https://poe.com/ and seriously thinking about quitting ChatGPTplus (the paid service) for it. The flexibility in switching models (Claude, Llama, GPT, etc.) is amazing; I am wondering what I would miss compared with ChatGPTplus.
#ai #poe #chatgptplus #llama3 #claude

zjuul,

@ErikJonker Following up on the link you shared yesterday, I installed one locally. That went smoothly, and it runs smoothly too. Thanks for the link!
I also pay for Gemini, for when I occasionally need some code, as a replacement for Stack Overflow.

ErikJonker,
@ErikJonker@mastodon.social avatar

@zjuul I have a paid ChatGPTplus account myself, but I am debating switching to Poe.com; it’s great how you can use all sorts of different models there. I also installed Llama3 on my four-year-old laptop at home; it ran a bit slowly 😀 but surprisingly well. This is going to have an enormous impact, since anyone can now also integrate models like these into applications locally, without the cloud.

troed, to llm
@troed@ioc.exchange avatar

Asked Llama 3 to implement a CRC32 routine in C. The 8B model.

With the exception of it forgetting to declare the table array, the code compiled without errors.

I also asked it to run the code on a test string, which it did and explained at each step what the intermediate CRC32 was.

Well. The result was wrong. Both when it executed the code itself, as well as when I compiled and ran it ;)

But this would definitely confuse someone who tried to use it for coding. I see nothing wrong with the code - it all looks perfect. If I get the time I might look into why it's not correct.

janhutter, to til

that a model (7B, Q4 variant) can be downloaded in LM-Studio and used as an AI assistant on my 5+ year old Lenovo notebook (without having a dedicated graphics card).

narrowcode, to ai

The release of Llama3 convinced me to try running a local LLM. I was pleasantly surprised about the performance and how easy it was to set up, so I wrote a blog post about the process:

https://narrowcode.xyz/blog/2024-04-23_taming-llamas_leveraging-local-llms/

#ai #llama3

publicvoit,
@publicvoit@graz.social avatar

@narrowcode Have you tried GPT4All yet?

narrowcode,

@publicvoit I have heard about it before and tried it out right now - performance might even be a little better on my machine.

The RAG features also work quite seamlessly out of the box, which is nice (and in my current setup, I'm having to use a special Obsidian plugin to get RAG for my notes, which is definitely less convenient).

I don't really like its GUI, but that's a minor gripe.

Overall, I think I still slightly prefer Ollama's UX over GPT4All or oobabooga's LLM interface.

joe, (edited ) to programming

Yesterday, we played with Llama 3 using the Ollama CLI client (or REPL). Today, I figured that we would play with it using the Ollama API. The Ollama API is documented on their GitHub repo. Ollama has a client that runs when you run ollama run llama3 and a service that can be accessed from something like MindMac, Amallo, or Enchanted. The service is what starts when you run ollama serve.

In our first Llama 3 post, we asked the model for “a comma-delimited list of cities in Wisconsin with a population over 100,000 people”. Using Postman and the completion API endpoint, you can ask the same thing.

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-1.30.48%E2%80%AFPM.png?resize=1024%2C811&ssl=1

You will notice the stream parameter is set to false in the body. If the value is false, the response will be returned as a single response object, rather than a stream of objects. If you are using the API with a web application, you will want to ask the model for the answer as JSON and you will probably want to provide an example of how you want the answer formatted.
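For reference, the request body posted to the completion endpoint ends up looking roughly like this (the formatting example inside the prompt is just an illustration):

{
  "model": "llama3",
  "prompt": "Give me a comma-delimited list of cities in Wisconsin with a population over 100,000 people. Respond as JSON, formatted like this: {\"cities\": [\"City One\", \"City Two\"]}",
  "stream": false
}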

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-1.45.15%E2%80%AFPM.png?resize=1024%2C811&ssl=1

You can use Node and Node-fetch to do the same thing.

If you run it from the terminal, it will look like this:

https://i0.wp.com/jws.news/wp-content/uploads/2024/04/Screenshot-2024-04-20-at-2.01.19%E2%80%AFPM.png?resize=1024%2C932&ssl=1

Have any questions, comments, etc.? Please feel free to drop a comment below.

https://jws.news/2024/lets-play-more-with-llama-3/

#AI #Amallo #Enchanted #llama3 #LLM #MindMac #NodeJs #Ollama #Postman
