I was trying to find out what it takes to "comfortably" create a web application with Dioxus and am really happy with it so far.
"Comfortably" means using the router and the fullstack feature, and avoiding raw SQL as much as possible by using SeaORM for persistence, so I can concentrate on domain logic only. The only thing that required raw SQL was getting authentication running, and that is only a single select-all query on a table.
#dioxus #rustlang #seaorm
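For context, the one raw-SQL escape hatch described above can look roughly like this in SeaORM: a hand-written SELECT run through `Statement`. This is a hedged sketch, not the actual code; the `users` table name is a guess.

```rust
// Sketch: the single raw select-all query, via SeaORM's raw-SQL interface.
// Table name `users` is hypothetical.
use sea_orm::{ConnectionTrait, DatabaseConnection, DbBackend, Statement};

pub async fn load_all_credentials(
    db: &DatabaseConnection,
) -> Result<Vec<sea_orm::QueryResult>, sea_orm::DbErr> {
    // Rows come back untyped; columns can then be read with
    // `row.try_get("", "username")` and friends.
    db.query_all(Statement::from_string(
        DbBackend::Sqlite,
        "SELECT * FROM users".to_owned(),
    ))
    .await
}
```

Everything else in the app can stay on the entity/query-builder side; this one function is the only place SQL text appears.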

https://www.sea-ql.org/SeaORM/docs/generate-entity/entity-first/

>This is where SeaORM shines: it automatically builds a dependency graph from your entities and determines the correct topological order to create the tables, so you don't have to keep track of them in your head.

Awesome. It sounds like #SeaORM might be Rust's equivalent of #EntityFramework from the #dotnet world. I'm fiddling with a local application that stores data in a #sqlite database, and have been looking at various crates that could handle schema and entity management. While SeaORM bills itself as being meant for web applications, I'm not seeing anything that would preclude using it in local apps.

#rustlang
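A hedged sketch of what that entity-first workflow looks like for a local SQLite app: define the entity in Rust, then let `Schema` derive the CREATE TABLE statement from it (this is where the dependency-graph/topological ordering from the quoted docs kicks in once you have several entities). The `note` entity here is made up for illustration.

```rust
use sea_orm::entity::prelude::*;

// A minimal entity; SeaORM derives Entity, Column, ActiveModel, etc. from it.
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "note")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub title: String,
    pub body: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

// Create the table directly from the entity definition -- no SQL written by hand.
pub async fn create_schema(db: &sea_orm::DatabaseConnection) -> Result<(), sea_orm::DbErr> {
    use sea_orm::{ConnectionTrait, Schema};
    let backend = db.get_database_backend();
    let stmt = Schema::new(backend).create_table_from_entity(Entity);
    db.execute(backend.build(&stmt)).await?;
    Ok(())
}
```

Nothing in this flow is web-specific, which matches the impression that local apps are fine.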

Is there an elegant way to update a field in #SeaORM using a database function? The function is also the default for that field (a random non-primary ID). I need to return the result of that update, and it currently takes three queries: 1. fetch to see if the record exists, 2. update the ID, 3. fetch the updated record and return it.
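One way to collapse those three round trips into one, assuming a backend that supports `UPDATE ... RETURNING` (Postgres, or SQLite >= 3.35): drop to a raw statement for this one query and let the database report both "did the row exist" (no row returned) and the new value. Table, column, and function names below are all hypothetical.

```rust
use sea_orm::{ConnectionTrait, DatabaseConnection, DbBackend, Statement};

pub async fn regenerate_public_id(
    db: &DatabaseConnection,
    record_id: i64,
) -> Result<Option<String>, sea_orm::DbErr> {
    let row = db
        .query_one(Statement::from_sql_and_values(
            DbBackend::Postgres,
            // `gen_random_uuid()` stands in for whatever database function
            // produces the random non-primary ID.
            "UPDATE item SET public_id = gen_random_uuid() WHERE id = $1 RETURNING public_id",
            [record_id.into()],
        ))
        .await?;
    // `None` means the record didn't exist; otherwise read back the new ID.
    row.map(|r| r.try_get("", "public_id")).transpose()
}
```

It trades the query-builder DSL for one line of SQL, but existence check, update, and read-back all happen in a single statement.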

I restarted one of my projects again today (and deleted 10 kloc of the previous code).

I started by implementing the basic model, then added a database persistence layer using #seaorm with #sqlite as the backend, but I am really slow at writing that. I don't know why, but the tooling feels weird to me, especially because you generate entity files and then manually edit their content (or am I misunderstanding something?).

Then I started to implement a very basic "just dump the data in one big json file on disk" backend, which was obviously much faster to write.

Next would be the basic network code and the over-the-wire protocol and then the behavior and validation code.

I really hope I get something basic working and don't get sidetracked again 😢 which is actually the reason it wasn't in POC state before.

#rust #rustlang

About 3000 changed LoC later, user data is now stored in a database instead of just as JSON in folders #SeaORM

Here I see that SeaORM is just some added burden. I see no benefit in using it for my project.

You need to learn their API (DSL) to query the database.

It names its stuff "entities" and "models", so you have to come up with another name for your own business models.

It is extremely difficult to work with relationships, and even harder if your models have multiple or chained relationships.

#seaorm
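For anyone who hasn't seen it, the DSL and relation setup being criticized look roughly like this (the `user`/`post` entities below are made up for illustration): a one-level relation is workable, but chained or multiple relations need `Linked` implementations or manual joins on top of this.

```rust
use sea_orm::{ColumnTrait, DatabaseConnection, DbErr, EntityTrait, QueryFilter};

pub mod user {
    use sea_orm::entity::prelude::*;
    #[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
    #[sea_orm(table_name = "user")]
    pub struct Model {
        #[sea_orm(primary_key)]
        pub id: i32,
        pub name: String,
    }
    #[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
    pub enum Relation {
        #[sea_orm(has_many = "super::post::Entity")]
        Post,
    }
    impl Related<super::post::Entity> for Entity {
        fn to() -> RelationDef { Relation::Post.def() }
    }
    impl ActiveModelBehavior for ActiveModel {}
}

pub mod post {
    use sea_orm::entity::prelude::*;
    #[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
    #[sea_orm(table_name = "post")]
    pub struct Model {
        #[sea_orm(primary_key)]
        pub id: i32,
        pub user_id: i32,
        pub title: String,
    }
    #[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
    pub enum Relation {
        #[sea_orm(
            belongs_to = "super::user::Entity",
            from = "Column::UserId",
            to = "super::user::Column::Id"
        )]
        User,
    }
    impl Related<super::user::Entity> for Entity {
        fn to() -> RelationDef { Relation::User.def() }
    }
    impl ActiveModelBehavior for ActiveModel {}
}

pub async fn examples(db: &DatabaseConnection) -> Result<(), DbErr> {
    // Filtering through the DSL instead of writing SQL:
    let _titled = post::Entity::find()
        .filter(post::Column::Title.contains("rust"))
        .all(db)
        .await?;

    // One-level relation: every user paired with their posts.
    let _pairs: Vec<(user::Model, Vec<post::Model>)> =
        user::Entity::find().find_with_related(post::Entity).all(db).await?;

    Ok(())
}
```

Whether that boilerplate is worth it over raw SQL is exactly the trade-off the post above is weighing.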

After testing both SeaORM & Tokio Postgres:

With both, you still need to manually write:
- junction tables
- migrations
- mappings to your useful or complex types/structs

So basically you just get an almost-raw result from the database, a representation of the SQL table, which you then transform into your business models.

#seaorm #postgres #tokioPostgres
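That last transformation step looks something like this in practice. `UserRow` stands in for whatever flat shape the ORM or driver hands back (a hypothetical struct, not SeaORM's actual `Model`), and `User` is the business model you actually want.

```rust
// Flat row, as it comes out of the SQL table.
#[derive(Debug, Clone)]
pub struct UserRow {
    pub id: i64,
    pub display_name: String,
    pub email: String,
    pub deleted: i64, // SQLite-style integer boolean
}

// Business model with richer types.
#[derive(Debug, Clone, PartialEq)]
pub struct User {
    pub id: i64,
    pub display_name: String,
    pub email: String,
    pub active: bool,
}

impl From<UserRow> for User {
    fn from(row: UserRow) -> Self {
        User {
            id: row.id,
            display_name: row.display_name,
            email: row.email,
            // Invert the storage representation into a domain concept.
            active: row.deleted == 0,
        }
    }
}

fn main() {
    let row = UserRow {
        id: 1,
        display_name: "Ada".into(),
        email: "ada@example.com".into(),
        deleted: 0,
    };
    let user: User = row.into();
    assert!(user.active);
}
```

The point of the post stands: the ORM saves you the SQL string, but this mapping layer is yours to write either way.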

Now that I've got #SeaORM's DatabaseConnection picked up in a #rust #warp filter, I can proudly say that I've got a basic understanding of FnOnce vs FnMut vs Fn.
Finally I also sorted out std::sync::Mutex vs futures::lock::Mutex; that was the last hurdle to make it work.
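The Fn-trait distinction mentioned above can be shown without warp or SeaORM at all. A filter like `warp::any().map(move || db.clone())` must be `Fn` because it is called many times, possibly concurrently, so it may only read its captures; the sketch below (my own illustration, not the warp code) contrasts the three traits.

```rust
// Three helpers, one per closure trait.
fn call_fn<F: Fn() -> String>(f: F) -> String { f() }            // callable any number of times
fn call_fn_mut<F: FnMut() -> String>(mut f: F) -> String { f() } // may mutate its captures
fn call_fn_once<F: FnOnce() -> String>(f: F) -> String { f() }   // consumes its captures

fn main() {
    let db = String::from("connection");

    // Fn: only clones `db`, so it can be shared and called repeatedly --
    // this is what a warp filter providing a DatabaseConnection needs.
    let provide = || db.clone();
    assert_eq!(call_fn(&provide), "connection");
    assert_eq!(call_fn(&provide), "connection"); // still usable afterwards

    // FnMut: mutates a captured counter, so each call needs exclusive access.
    let mut count = 0;
    let mut counting = || { count += 1; format!("call {count}") };
    assert_eq!(call_fn_mut(&mut counting), "call 1");

    // FnOnce: moves `db` out of the closure, so it can run at most once.
    let consuming = move || db;
    assert_eq!(call_fn_once(consuming), "connection");
}
```

The Mutex half of the puzzle is the same idea one level up: `std::sync::Mutex` blocks the thread while held, while `futures::lock::Mutex` yields to the executor, which is why the async one is the safe default when the guard lives across an `.await`.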
#Poem and #SeaORM (sea-orm) are the #Rust equivalent of #Python's #FastAPI + #SqlModel. Seems pretty easy to use.
This turned into an interesting journey: I learned a lot about #seaorm by reading what #loco did with it and how they enhanced it. My current #rust implementation remains purely #seaorm based, but #loco has a lot to offer. Using just the #seaorm layer that #loco built seems attractive.