ramikrispin, to datascience
@ramikrispin@mstdn.social avatar

In the past few months, I created a bunch of Docker 🐳 tutorials covering random topics, from a fun setting for a Python 🐍 environment on the CLI to advanced topics such as multi-stage builds 🏗️. I organized all the tutorials under one folder, and I plan to keep updating this folder with future-related ones 😎.

Currently on my Docker tutorial TODO list:
➡️ Docker ENTRYPOINT vs CMD
➡️ Docker multi-architecture build

🔗 https://medium.com/@rami.krispin/list/docker-21408ce79e6a

Enjoy!

gamey, to random German
@gamey@chaos.social avatar

Is there any good blog post to learn customization with relatively limited knowledge about ? I don't like the idea of a distro that I can't customize easily, so normal doesn't sound like my favorite choice, but uBlue and similar approaches sound very tempting, and I love the concept of distros in general! Also, do you think it's easy enough to learn that I can immediately switch my main computer over, or should I do some more testing in VMs first?

ramikrispin, to vscode
@ramikrispin@mstdn.social avatar

Getting started with the Dev Containers extension 🚀👇🏼

The Dev Containers extension is the main reason I moved to VS Code, as it provides native and seamless integration with Docker 🐳. I started working on a sequence of tutorials focusing on the VS Code Dev Containers extension. The first tutorial in the sequence focuses on getting started with the extension:

🔗: https://medium.com/towards-data-science/getting-started-with-the-dev-containers-extension-a5ea49abfc34

geekymalcolm, to random
@geekymalcolm@ioc.exchange avatar

Nice to see my snowflake docker container being useful to some people!

dave, to random
@dave@puz.fun avatar

Anyone here have experience updating a Dockerized Perl application? Specifically, I'm looking for help upgrading the app from Perl 5.26 to 5.30 and rectifying errors where some libraries (I think?) were built for 5.26 and won't work under 5.30.

#Perl #ModernPerl #PerlDev #Docker #Perl5

joe, to random

Yesterday, I wrote about how I moved a mastodon bot from Pipedream to a docker container. Docker is an efficient way of running isolated little scripts like that. Today, I wanted to review some basic debugging techniques to ensure your script runs as expected.

What docker images exist on the system?

When we looked at how to dockerize a node app, I said that you create a docker image and then run it as a container. So, how do you list the docker images on a system? You run docker images.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.37.35%E2%80%AFPM.png?resize=1024%2C856&ssl=1

What docker containers exist on the system?

If you run docker ps, you can get what containers are running, and if you run docker ps -a, it will include containers that aren’t running.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.46.33%E2%80%AFPM.png?resize=1024%2C856&ssl=1

How do you access a container’s shell?

Like a VM or a system running on bare metal, you can get a shell inside of the docker container. The first step is knowing the container ID for the container you want a shell for. If you look at the output from the docker ps command, you can find it.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-12.46.33%E2%80%AFPM-2.png?resize=1024%2C856&ssl=1

At this point, you run docker exec -it [container id] /bin/sh to get a shell inside the container.

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-04-at-1.12.14%E2%80%AFPM.png?resize=1024%2C856&ssl=1

Once you know that the image is there, know if it is running or not, and have a shell inside the container, you should be able to find what is wrong with your container.
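The whole routine from the post condenses into a few commands; this is a sketch of the sequence described above (the container name `my-app` is a placeholder — use whatever ID or name `docker ps` shows you):

```shell
# List the Docker images present on the system
docker images

# List all containers, including ones that aren't running
docker ps -a

# Open a shell inside a running container
# (grab the container ID or name from the `docker ps` output)
docker exec -it my-app /bin/sh

# Poke around inside the container, then leave
exit
```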

Have a question, comment, etc.? Feel free to drop a comment below.

https://jws.news/2024/debugging-a-docker-container/

wyri, to random
@wyri@haxim.us avatar

Another reason to run things such as #RabbitMQ as a cluster: if the #Docker image is missing the #arm64 arch, one pod will stay pending while the rest of the cluster continues working

joe, to mastodon

Back in 2022, I created “Good Morning, Milwaukee!“. It is a bot that posts every day at 6 am with the weather, the times for sunrise and sunset, and a photo from around the city. When I first wrote it, I wrote it in Node and put it up on Pipedream. Lately, there have been some issues with the weather API that it was using, so I decided to replace it with the OpenWeather API, and I figured that while I was at it, I would rewrite it in Python, dockerize it, and run it on my new home lab server.

Let’s start with what the actual Python script looks like.

If you want to reuse this code to create your own bot, there are variables at the top for api_key, zip_code, and mastodon_access_token. The actual posting is done using Mastodon.py.

So, what would the Dockerfile look like?

You’ll notice that it also needs a requirements.txt and a crontab file. Let’s see what those look like.

Just make sure that you have a newline at the end of your crontab file. At this point, you can run docker build -t gmmke-app . to build the docker image and then run docker run -d gmmke-app to run the container.
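The original post's code blocks didn't survive the repost, so as a purely illustrative sketch — file names, base image, and layout are all assumptions, not the author's actual files — the Dockerfile might have looked something like:

```dockerfile
# Hypothetical reconstruction -- the original file was not preserved.
FROM python:3.11-slim

# cron drives the daily 6:00 AM post; TZ keeps it on Milwaukee time.
RUN apt-get update && apt-get install -y --no-install-recommends cron \
    && rm -rf /var/lib/apt/lists/*
ENV TZ=America/Chicago

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Register the schedule (the crontab file needs a trailing newline).
COPY crontab /etc/cron.d/gmmke
RUN crontab /etc/cron.d/gmmke

# Post once at startup, then let cron take over.
CMD python bot.py && cron -f
```

paired with a one-line crontab such as `0 6 * * * cd /app && python bot.py` (again, hypothetical — the script name and paths are placeholders).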

https://i0.wp.com/jws.news/wp-content/uploads/2024/05/Screenshot-2024-05-03-at-3.24.33%E2%80%AFPM.png?resize=1024%2C856&ssl=1

With that, it is going to post once when you create the container and then daily at 6:00 AM (Milwaukee time).

Have any questions, comments, etc.? Feel free to drop them below.

https://jws.news/2024/i-rewrote-good-morning-milwaukee-in-python/

governa, to random
@governa@fosstodon.org avatar

How to Install Tiny Tiny RSS Using #Docker on PC (Ultimate Guide) :rss:

https://linuxtldr.com/tiny-tiny-rss/

ramikrispin, to random
@ramikrispin@mstdn.social avatar

Another great reason why R users should use Docker 🐳 - Airflow 😎

pandoc, to random
@pandoc@fosstodon.org avatar

The #pandoc #Docker images had been experiencing some bit-rot, but have been updated and are back in service now. The images include the latest release (3.1.13) and the current development version. They continue to be available in the four flavors minimal, core, latex, and extra.
#TeXLaTeX images now ship with #TeXLive 2024.

https://hub.docker.com/r/pandoc/minimal
https://hub.docker.com/r/pandoc/core
https://hub.docker.com/r/pandoc/latex
https://hub.docker.com/r/pandoc/extra

linuxiac, to ubuntu
@linuxiac@mastodon.social avatar

Install Docker effortlessly on Ubuntu 24.04 LTS (Noble Numbat) with our expert, easy-to-follow guide. Perfect for beginners and pros alike.
https://linuxiac.com/how-to-install-docker-on-ubuntu-24-04-lts/

sirber, to php
@sirber@fosstodon.org avatar

I started a prototype of a framework-less PHP architecture on my playground: separated public and private files, and enabled autoloading using Composer and PSR-4. It's fun! 😀

https://gitlab.com/sirber/playground/-/tree/main/php/pure?ref_type=heads

tcurdt, to NixOS
@tcurdt@mastodon.social avatar

After using NixOS, the whole container ecosystem feels like holding it wrong.

I can no longer un-see it 🫣
I am doomed.

gnulinux, to linux German
@gnulinux@social.anoxinon.de avatar

Tutorial: Self-hosting with Docker Compose

A guide to hosting various services with Docker Compose

https://gnulinux.ch/tutorial-self-hosting-mit-docker-compose

rladies_bergen, to programming
@rladies_bergen@hachyderm.io avatar

It's May already! Let's do something fresh and learn about how to use containers with your projects!
RSVP here:
https://www.meetup.com/rladies-bergen/events/300711368/

webology, to django
@webology@mastodon.social avatar

For any #Django + #Docker #Compose users out there, I was struggling for several days because template and code changes were only being picked up once per session and then cached to infinity.

I switched from #Orbstack back to vanilla Docker and that problem went away.

So I think they might have a bug or something else that's non-obvious. I checked and didn't see an issue logged yet.

I like Orbstack a lot, but what a frustrating bug to fight.

governa, to random
@governa@fosstodon.org avatar

Attackers Planted Millions of Imageless Repositories on #Docker Hub :docker:

The purported metadata for each of these containers had embedded links to malicious files.

https://www.darkreading.com/cyber-risk/attackers-planted-millions-of-imageless-repositories-on-docker-hub

89luca89, to opensource
@89luca89@fosstodon.org avatar

Hi all!

Glad to announce release 1.7.2 of Distrobox!

Many bugfixes, and a couple of behavioural improvements that will resolve lots of future issues!

Take a look at the changelog here!

https://github.com/89luca89/distrobox/releases/tag/1.7.2.0

whydoesnothingwork, to linux
@whydoesnothingwork@mastodon.social avatar
pjk, to python
@pjk@www.peterkrupa.lol avatar

One thing you notice right away about LLMs is they bear a striking resemblance to that ubiquitous internet character, the reply-guy: they always have an answer, they are brimming with confidence, and they are often completely wrong.

It seemed only natural, then, to turn one into a full-time online version of itself, RGaaS (Reply-Guy as a Service), always on, always ready to reply to anything when summoned, with an “Actually,” followed by his (it’s always a “he”) opinion on what you should be thinking.

And so I built the Actually Bot.

https://www.peterkrupa.lol/wp-content/uploads/2024/05/actually_bot1.png

Basically, the way it works is that any user can reply to any Mastodon post (or, I guess, any post on the ActivityPub protocol) and tag the Actually Bot (@actuallybot) and it will reply with its most mediocre take on why everyone else is wrong, or at least not quite right.

The reply-guys can all move on to something else now, I have automated them out of a job.

This was a really fun project and has a lot of separate parts. First, I had to modify an LLM to give it the right “personality” for this job. To do this, I used Ollama, a tool for running open source LLMs locally, on your own machine. I used the “customized prompt feature”: You basically write a little file that Ollama uses to tweak the LLM of your choosing to give it certain characteristics. I went with the new Llama3:8b model from Meta as my base model, then put the following instructions in my modelfile:

FROM llama3
PARAMETER temperature 3
SYSTEM """You are receiving social media posts as prompts, and your job is to reply to these prompts. Please start all your replies with the word "actually". Reply as if you know more than everyone. You must recontextualize every prompt and correct or amend it, informing the prompter that they are wrong, even if ever so slightly. You write in short, sentences. Your replies must be short. Keep them to about three sentences. This is essential: keep your replies short. They must be under 500 characters."""

Then I ran the following command in the console:

ollama create actually_llama -f ./actually_llama

… and my model was ready to roll. Next, I needed a program to connect to the Ollama API to send the LLM prompts and get responses. Python was great for that, as both Ollama and Mastodon have solid Python libraries. Probably the slowest part was picking through Mastodon.py to figure out how the methods work and what exactly they return. It’s a very robust library with a million options, and fortunately it’s also extremely well documented, so while it was slow going, I was able to whack it together without too much trouble.

I’m not going to get into all the code here, but basically, I wrote a simple method that checks mentions, grabs the text of a post and the post it is replying to, and returns them for feeding into the LLM as the prompt.

Despite my very careful, detailed, and repetitive instructions to be sure replies are no more than 500 characters, LLMs can’t count, and they are very verbose, so I had to add a cleanup method that cuts the reply down to under 500 characters. Then I wrote another method for sending that cleaned-up prompt to Ollama and returning the response.
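That cleanup step is simple enough to sketch. The post doesn't show the actual method, so this is a hypothetical version (function name and word-boundary behavior are my assumptions) of a helper that trims a verbose LLM reply to Mastodon's 500-character limit:

```python
MAX_CHARS = 500  # Mastodon's default per-post character limit


def clean_reply(text: str, limit: int = MAX_CHARS) -> str:
    """Trim an LLM reply to fit the Mastodon character limit.

    Hypothetical sketch: cuts at the last whole word that fits
    and appends an ellipsis so the truncation is visible.
    """
    text = text.strip()
    if len(text) <= limit:
        return text
    # Leave room for the ellipsis, then back up to a word boundary.
    cut = text[: limit - 1]
    cut = cut.rsplit(" ", 1)[0]
    return cut + "…"
```

A prompt under the limit passes through untouched; anything longer comes back at 500 characters or fewer, ending in an ellipsis.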

The main body starts off by getting input for the username and password for login, then it launches a while True loop that calls my two functions, checking every 60 seconds to see if there are any mentions and replying to them if there are.

OK it works! Now came the hard part, which was figuring out how to get to 100% uptime. If I want the Actually Bot to reply every time someone mentions it, I need it to be on a machine that is always on, and I was not going to leave my PC on for this (nor did I want it clobbering my GPU when I was in the middle of a game).

So my solution was this little guy:

https://www.peterkrupa.lol/wp-content/uploads/2024/05/lenovo.jpg

… a Lenovo ThinkPad with a 3.3GHz quad-core i7 and 8GB of RAM. We got this refurbished machine when the pandemic was just getting going, and it was my son’s constant companion for 18 months. It’s nice to be able to put it to work again. I put Ubuntu Linux on it and connected it to the home LAN.

I actually wasn’t even sure it would be able to run Llama3:8b. My workstation has an Nvidia GPU with 12gb of VRAM and it works fine for running modest LLMs locally, but this little laptop is older and not built for gaming and I wasn’t sure how it would handle such a heavy workload.

Fortunately, it worked with no problems. For running a chatbot, waiting 2 minutes for a reply is unacceptable, but for a bot that posts to social media, it’s well within range of what I was shooting for, and it didn’t seem to have any performance issues as far as the quality of the responses either.

The last thing I had to figure out was how to actually run everything from the Lenovo. I suppose I could have copied the Python files and tried to recreate the virtual environment locally, but I hate messing with virtual environments and dependencies, so I turned to the thing everyone says you should use in this situation: Docker.

This was actually great because I’d been wanting to learn how to use Docker for a while but never had the need. I’d installed it earlier and used it to run the WebUI front end for Ollama, so I had a little bit of an idea how it worked, but the Actually Bot really made me get into its working parts.

So, I wrote a Dockerfile for my Python app, grabbed all the dependencies and plopped them into a requirements.txt file, and built the Docker image. Then I scp’d the image over to the Lenovo, spun up the container, and boom! The Actually Bot was running!

Well, OK, it wasn’t that simple. I basically had to learn all this stuff from scratch, including the console commands. And once I had the Docker container running, my app couldn’t connect to Ollama because it turns out, because Ollama is a server, I had to launch the container with a flag indicating that it shared the host’s network settings.
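The flag in question is presumably Docker's host-network mode, which lets the containerized app reach a service listening on the host's localhost. A hypothetical invocation (the image name is a placeholder, and 11434 is Ollama's default port):

```shell
# Share the host's network stack so the app inside the container
# can reach the Ollama server on localhost:11434.
docker run -d --network host actually-bot
```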

Then once I had the Actually Bot running, it kept crashing when people tagged it in a post that wasn’t a reply to another post. So, went back to the code, squashed bug, redeploy container, bug still there because I didn’t redeploy the container correctly. There was some rm, some prune, some struggling with the difference between “import” and “load” and eventually I got everything working.

Currently, the Actually Bot is sitting on two days of uninterrupted uptime with ~70 successful “Actually,” replies, and its little laptop home isn’t even on fire or anything!

Moving forward, I’m going to tweak a few things so I can get better logging and stats on what it’s actually doing so I don’t have to check its posting history on Mastodon. I just realized you can get all the output that a Python script running in a Docker container prints with the command docker logs [CONTAINER], so that’s cool.

The other thing I’d like to do is build more bots. I’m thinking about spinning up my own Mastodon instance on a cheap hosting space and loading it with all kinds of bots talking to each other. See what transpires. If Dead Internet Theory is real, we might as well have fun with it!

https://www.peterkrupa.lol/2024/05/01/actually-building-a-bot-is-fun/

#Docker #Llama3 #Ollama #Python

image/jpeg

dotnet, to dotnet
@dotnet@dotnet.social avatar

🔐Secure your container build and publish with .NET 8

.NET 8 has new security features for containers, including non-root images and SDK tools. Discover how to create non-root container images, configure Kubernetes pods, and inspect images and containers for enhanced security.

https://devblogs.microsoft.com/dotnet/secure-your-container-build-and-publish-with-dotnet-8/

#dotnet #docker

mttaggart, to Cybersecurity

Okay 20% of repos is...high.

Our research reveals that nearly 20% of these public repositories (almost three million repositories!) actually hosted malicious content. The content ranged from simple spam that promotes pirated content, to extremely malicious entities such as malware and phishing sites, uploaded by automatically generated accounts.

jfrog.com/blog/attacks-on-docker-with-millions-of-malicious-repositories-spread-malware-and-phishing-scams/

kubikpixel, to webdev
@kubikpixel@chaos.social avatar

Ugh… getting TypeScript to run the way I needed it for WebComponents took me forever of searching for libraries, and I haven't even started writing the code tests yet… 🤦‍♂️🤷‍♂️

kubikpixel,
@kubikpixel@chaos.social avatar

»Millions of Malicious 'Imageless' Containers Planted on Docker Hub Over 5 Years«

I hope I'm safer with @Podman_io and don't have to worry.

🐋 https://thehackernews.com/2024/04/millions-of-malicious-imageless.html


#webdev #docker #itsecurity #imageless #containers #podman #longtime #web #it

Datenproletarier, to debian German
@Datenproletarier@chaos.social avatar

The more you know: you can simply reinstall Docker and all containers remain intact!

I had installed Docker on Debian via 'apt install docker.io', but that gives you an old, unofficial version (v20.10.24).

Now I bravely followed the official instructions, removed all the old Docker-related packages, and installed 'docker-ce' instead (v26.1.0).

And lo and behold, all containers are still (or rather, again) running.

https://docs.docker.com/engine/install/debian/

#Docker #Debian #Linux
