Announcing Flask-SQLAlchemy-Lite, a new lightweight replacement for Flask-SQLAlchemy that provides engine configuration and session lifetime management, but none of the other custom stuff in the prior extension. It works with Flask and Quart, sync and async. I figured out the core idea on the flight to PyCon US, teased it during FlaskCon, and now it's available! Check out the docs to get started! https://flask-sqlalchemy-lite.readthedocs.io #Python #Flask #SQLAlchemy
Looking forward to SQLAlchemy 2.1 so I can do ForeignKey(lambda: Other.id) instead of ForeignKey("other.id"), and relationship(back_populates=lambda: Other.things). Last little bit needed for (non-string) forward references that don't cause circular imports. #Python #SQLAlchemy
Just released Flask-Alembic 3.0! This extension combines Flask and Flask-SQLAlchemy with the Alembic migration library, providing CLI and programmatic access to Alembic's functionality. It went 7.5 years without needing a release. This fixes compatibility with Flask-SQLAlchemy 3.1, and generally modernizes the project, tooling, and minimum requirements. https://github.com/davidism/flask-alembic/releases/tag/3.0.0 #Python #Flask #SQLAlchemy
I've got a sneak peek of my latest open source project: Paracelsus.
Long story short, I got sick of manually making database diagrams for SQLAlchemy. The data is all there, so why not generate the diagrams directly?
Paracelsus (named after the alchemist who wrote about mermaids) will read your database models and create diagrams in either Mermaid or Dot format. It can also be used to inject diagrams into markdown files as code blocks.
Instead of writing complicated try-catch constructs to #retry things, using auto-retrying function #decorators has certainly changed my life for the better.
This is using #SQLAlchemy & the #backoff library https://pypi.org/project/backoff/ to retry generating a unique token, in the rare event that we chose a token that’s already in the database. On an IntegrityError, the function will “call itself” again.
(Event hooks & exponential backoff are available too.)
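The pattern described above can be sketched with the stdlib alone: sqlite3 stands in for the real database, and a hand-rolled decorator stands in for backoff's on_exception (which adds exponential backoff, event hooks, and more). The token scheme and names here are invented for illustration.

```python
import random
import sqlite3

def retry_on(exc, max_tries=20):
    """Minimal stand-in for an auto-retrying decorator: re-call the
    decorated function whenever `exc` is raised, up to `max_tries`."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except exc:
                    if attempt == max_tries - 1:
                        raise
        return wrapper
    return decorator

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tokens (token TEXT PRIMARY KEY)")

@retry_on(sqlite3.IntegrityError)
def issue_token():
    # Deliberately tiny token space so collisions actually happen;
    # a duplicate INSERT violates the PRIMARY KEY and triggers a retry.
    token = f"tok-{random.randrange(8)}"
    db.execute("INSERT INTO tokens (token) VALUES (?)", (token,))
    return token

tokens = {issue_token() for _ in range(5)}
print(sorted(tokens))
```

The real thing would decorate the SQLAlchemy insert with backoff's on_exception against IntegrityError, as the post describes; the control flow is the same.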
This #SQLAlchemy question has been in my head these past few days, so I finally decided to post it on the official support channels. If by any chance you know the answer, please let me know!
Currently, we run end-to-end API tests against a sqlite DB, which takes about 2 seconds. I want to switch to postgres (what we use in prod) but that makes the tests take a lot longer, about 12 seconds.
Is there a way to improve this?
What might be causing the slowness? Where can I look? Is it networking to the docker container? Postgres enforcing constraints? Running flush after every DB fixture? Something else?
I've got this data-complex astronomy project I'm working on (pro-bono).
I have a number of different backends I pull from (NASA, universities, etc.), all of which use different field names for the same thing, resulting in lots of tedious (and error-prone) data parsing and checking code.
Was hoping to use something like pydantic as an in-between translator/mapper of the different fields to the same schema(s).
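The translator/mapper idea can be sketched with the stdlib alone: one alias table per backend, renaming each source's fields into one shared schema. All field names below are invented for illustration; pydantic would cover the same renaming with Field(validation_alias=...) and add type coercion on top.

```python
from dataclasses import dataclass

# One alias table per backend, mapping their field names to ours.
ALIASES = {
    "nasa": {"obj_id": "object_id", "ra_deg": "ra", "dec_deg": "dec"},
    "uni": {"id": "object_id", "right_ascension": "ra", "declination": "dec"},
}

@dataclass
class Observation:
    """The single shared schema all backends are normalized into."""
    object_id: str
    ra: float
    dec: float

def normalize(record: dict, source: str) -> Observation:
    # Rename the backend's fields, then validate by constructing the
    # dataclass (a missing required field raises TypeError).
    mapping = ALIASES[source]
    return Observation(**{mapping[k]: v for k, v in record.items() if k in mapping})

obs = normalize({"obj_id": "M31", "ra_deg": 10.68, "dec_deg": 41.27}, "nasa")
print(obs.ra)  # 10.68
```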
any #python reading recommendations to optimize memory use of a dict containing lists which in turn contain lots of small dicts? Once built up, they can be read-only.
In memory #sqlite is also a good option. I like sqlite because there are great tools like #SQLAlchemy which allows you to have #ORM features, but if it is simple I would just go with the built-in libraries to improve performance.
I also like sqlite with #Docker, as I have the simplicity of a sqlite database, but I can easily share it with containers via mounts and persist the data during development and production.
#VScode also has great extensions for viewing sqlite databases, making it a glorified csv with advanced query capabilities.
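For the read-mostly nested-dict case in the question, the in-memory option is one connection string away. A minimal stdlib sketch (the table layout is invented for illustration): one flat indexed table replaces the dict of lists of small dicts, and SQL aggregation replaces Python loops.

```python
import sqlite3

# ":memory:" keeps the whole database in RAM, no file needed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (bucket TEXT, name TEXT, value REAL)")
con.execute("CREATE INDEX ix_bucket ON items (bucket)")

# What was {"a": [{"name": "x", "value": 1.0}, ...], "b": [...]}
# becomes plain rows.
con.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [("a", "x", 1.0), ("a", "y", 2.0), ("b", "x", 3.0)],
)

# The "advanced query" part: aggregate in SQL instead of in Python.
rows = con.execute(
    "SELECT bucket, COUNT(*), SUM(value) FROM items GROUP BY bucket ORDER BY bucket"
).fetchall()
print(rows)  # [('a', 2, 3.0), ('b', 1, 3.0)]
```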
At first, I didn’t like type hints in #Python, but we decided to give it a go since our codebase really exploded in the last couple of years. All I can say now is we should have done it earlier. I still find it unbelievable that we discovered so many small bugs that went unnoticed all these years.
I am quite lazy with them and wish they were detected more automatically, especially for dictionaries. During development, my dictionaries often change, and I get lazy. Once my code base grows, it is very frustrating to have to go back!
But I have to say, with something like #SQLAlchemy, the type hints really make development easier!
Evaluating non-mapped column expression 'run_updates.run_id' onto ORM instances; this is a deprecated use case. Please make use of the actual mapped columns in ORM-evaluated UPDATE / DELETE expressions.
Context: I'm gathering counts out of band and want to update the counts of multiple rows at once.
What is your favorite #Python #ORM for #SQL and #SQLite? I am currently looking for an ORM just to simplify my implementation.
I am considering going for SQLite because the writes and reads are low, and the total size will also be a few hundred records, but I want the strengths of SQL. Also, I am running it in #Docker, so it simplifies the deployment.
@Stark9837 for a small project it's not that important. Both #SQLAlchemy and #Peewee will do the work for you. The limitation will be on the SQLite side
I am aware of the limitations, but I am far below them. I want the project to be easily deployable by others, and then the usage will also be low enough for it to suffice.
#Peewee feels a lot like the web-development #ORM that you get for #React and #JS, but I like #SQLAlchemy, the docs also seem good.
I prefer to use something more popular, so that the support is better. The docs of #SQLAlchemy are a bit odd. I found myself getting stuck after the tutorial, like you said. It seems the 2.0 documentation isn't the best with the new declarative style, but I found some other resources, and I am diving into code I found on #Github to see how others did it.
#Peewee is cool, but names like that always turn me off. I need serious names for libraries I use, I don't know. But I can see that it had great influence from modern #JS #ORM styles, so it might be better for people coming from that.
I haven't used async and await with #Python yet. So this will also be a fun way to get into it with #SQLAlchemy. What I like about it is that I could also easily move away from #SQLite if I need to, and #PostgreSQL has great #Docker support, so I wouldn't need to change much.
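The async/await syntax itself can be tried without any database first. A minimal asyncio warm-up (names invented for illustration); SQLAlchemy's asyncio support (create_async_engine / AsyncSession) follows the same await pattern:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulates an awaitable query; control is yielded during the sleep,
    # which is what lets other coroutines run in the meantime.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Run both "queries" concurrently instead of back to back.
    return list(await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01)))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```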
I'm designing a library that provides a core set of functionality, then provides integrations with SQLAlchemy and Flask. Those specific integrations are the reason I wrote the library, but other integrations could be written around the same core, and core can be used without any integration. Should I split core and integrations into separate libraries? #Python