Did I mention that it would be really nice to have #PostgreSQL as an embeddable library similar to #SQLite? Size wise that should be fine, I think the whole PG daemon is just ~5MB.
I know that the architecture doesn't lend itself to it, but someone has to do this eventually! 🙂
#Windows' upcoming #Recall feature is enabled by default in the OOBE setup btw and CANNOT be disabled there - you have to turn it off afterwards in the Settings app.
I don't know if you'd trust #Microsoft to actually respect that setting either, wouldn't be the first and only setting on Windows that doesn't actually do anything you'd want it to (or not to) do.
OH and also - Microsoft stores all the Recall data in an #SQLite database in plaintext, so you KNOW it's secure and definitely not easy to steal for anyone who gains physical or remote access to your device <3
I have a table posts (#SQLite), with a string repo (user DID) and int thread_id. Thread_id references the thread root post, around 1/6 of posts have thread_id NULL, the rest are replies. There's an index on repo+time and one on thread_id.
I want to select recent posts from one or more users that have thread_id NULL. But this very smart query planner uses the thread_id index to find posts, even though that will make it check 1/6 of all posts…
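The table and column names below are assumptions reconstructed from the description, but a partial index that covers exactly the root-post case usually talks the planner out of the thread_id index (prefixing the column with `+` in the WHERE clause is the other common escape hatch). A sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE posts (repo TEXT, time INTEGER, thread_id INTEGER);
    CREATE INDEX posts_repo_time ON posts(repo, time);
    CREATE INDEX posts_thread_id ON posts(thread_id);
    -- Partial index covering exactly the "root posts" case:
    CREATE INDEX posts_roots ON posts(repo, time) WHERE thread_id IS NULL;
""")
# ~1/6 of the rows are thread roots, mirroring the described data shape
con.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [("did:plc:alice", t, None if t % 6 == 0 else 1) for t in range(600)],
)
con.commit()

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM posts
    WHERE repo = 'did:plc:alice' AND thread_id IS NULL
    ORDER BY time DESC
    LIMIT 50
""").fetchall()
for row in plan:
    print(row[3])  # the human-readable plan detail
```

The partial index is both smaller and satisfies the `thread_id IS NULL` term outright, so the planner no longer has to scan 1/6 of all posts through the thread_id index.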
Did you know that you can have a #SQLite CLI config file? Pretty handy for changing the default memory limit, output format, or even print a nice banner.
It allows SQLite to use up to 1 GiB of page cache, uses nice Unicode box drawing characters, and tracks the runtime of queries by default. It also reminds me of how to turn these things off again.
(Pro tip: Set .timer on last, else it will print the stats for each of the start-up commands … 😬)
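For reference, the file is `~/.sqliterc` (read by the `sqlite3` shell at start-up) and it accepts both dot-commands and plain SQL. A sketch along those lines - the values here are illustrative, not the actual config:

```
-- ~/.sqliterc: read by the sqlite3 shell at start-up
-- Roughly 1 GiB of page cache (negative value = size in KiB)
PRAGMA cache_size = -1048576;
.mode box
.headers on
.print "Revert with: .mode list / .timer off"
.timer on
```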
I released a new version of my #GoLang module for persistent message queues based on #SQLite today. I added a basic job runner, since that's one of the things I use a queue for most. Check it out at https://www.goqite.com
Richard Hipp, developer of #SQLite, on a podcast, talking about open contributions in FOSS: "They come to you and say, “Well, I've got this cool feature. Why don't you just take it? I don't understand.” What they're really saying to me is, “Hey, I made this hack that's useful to me. Why don't you take it, test it, and maintain it for me for 25 years for free?”"
@vwbusguy
Suggestion:
import sqlite3 and use the SQLite date/time functions. No database required. It will be faster than shelling out to bash all the time.
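A minimal sketch of that suggestion - the specific expressions are just examples:

```python
import sqlite3

# An in-memory connection is enough to evaluate date/time expressions;
# no database file (or server) involved.
con = sqlite3.connect(":memory:")

now, = con.execute("SELECT datetime('now')").fetchone()
in_a_week, = con.execute("SELECT date('now', '+7 days')").fetchone()
weekday, = con.execute("SELECT strftime('%w', '2024-01-01')").fetchone()

print(now, in_a_week, weekday)  # %w: 0 = Sunday, so '1' means Monday
```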
Very nice tool for converting an #SQLite database into #MySQL: https://github.com/techouse/sqlite3-to-mysql. I'm planning to experiment with switching from SQLite to MySQL or Postgres (and maybe DuckDB?) for the feeds database and see how it behaves and how the performance differs… 🤔
It is a bit unfortunate that one can't back up an #SQLite database to, and restore it from, a byte buffer. Otherwise one could also use it to send serialized data over the network 🙈
Okay, if you have 250 GB of disk space available, and a few gigs of RAM, #KomootPhoton is impressive. A query like localhost:2322/api?q=rathausstr+17, with no further qualifiers, bounding box, or whatever, returned after 1048 ms.
That's on my laptop. Searching over all of the addresses on the globe.
So, uhm, not to dunk on Photon, but I've just built a proof of concept #SQLite database containing all of the addresses in OpenStreetMap and created a full-text search index over them.
A query like "give me all OSM objects where any of the address fields starts with 'Frankf'" takes 36 ms and uses 11 MB of RAM.
That's an FTS5 trigram search over 13.5 million unique strings, then joining them back to nodes & ways.
The database takes 25 GB, compared to Photon's 230 GB.
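The schema details here are guesses, but the core trick can be sketched with FTS5's trigram tokenizer (SQLite 3.34+), which indexes every 3-character substring and so supports fast prefix and substring matches:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical address table; tokenize='trigram' needs SQLite 3.34+
con.execute("CREATE VIRTUAL TABLE addr USING fts5(name, tokenize='trigram')")
con.executemany(
    "INSERT INTO addr (name) VALUES (?)",
    [("Frankfurter Allee",), ("Rathausstraße",), ("Bahnhofstraße",)],
)

# Trigram matching is substring matching (case-insensitive by default),
# so a 'Frankf' query finds every row containing that fragment.
rows = con.execute("SELECT name FROM addr WHERE addr MATCH 'Frankf'").fetchall()
print(rows)
```

In the real database the FTS table would hold the deduplicated address strings and a join would map the matches back to OSM node/way IDs.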
I have an SQLite database with 1 table. The table has 30 entries.
I feel like figuring out how much faster the SQLite version is, with reads, updates, and inserts, vs a text file.
@jamesog thanks for sharing this! I’m going to have to play with it. I have some complex #JSON cases that might really benefit. I also see promise in cases where today I might translate #XLSX to #CSV and then import into #SQLite. Why do all that if I can query the original directly? Awesome!
And to start off Monday, we have "This database engine does not support fulltext search operations." when running local tests in SQLite, even though we have verified that FTS5 is enabled.