ah shit my attempt to tape text embedding vector search to the side of GTS is actually sort of working. currently prototyping with PGVector and local Ollama running EmbeddingGemma. creating embeddings and indexing them at a few hundred posts per second is using essentially none of my M1 laptop's CPU.
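the glue is simple enough to sketch. hedged python sketch, not the actual GTS code: hit the local Ollama embeddings endpoint and format the result as a pgvector text literal. the table/column names and the `embeddinggemma` model tag are assumptions for illustration.

```python
# Hedged sketch: embed post text via a local Ollama server, format for pgvector.
# Endpoint and field names follow Ollama's embeddings API; everything else
# (model tag, table/column names) is invented for illustration.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default port

def embed(text, model="embeddinggemma"):
    """Ask Ollama for one embedding vector (a list of floats)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def to_pgvector_literal(vec):
    """pgvector accepts vectors as '[x,y,z]' text literals."""
    return "[" + ",".join(repr(float(x)) for x in vec) + "]"

# then e.g. with psycopg (hypothetical table name):
#   cur.execute(
#       "INSERT INTO post_embeddings (post_id, embedding) VALUES (%s, %s::vector)",
#       (post_id, to_pgvector_literal(embed(post_text))),
#   )
```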

the prototype is probably flexible enough to switch to something even more basic like Word2Vec or GloVe for the low end of GTS deployments. figuring out how to get the sqlite-vec extension into GTS WASM SQLite is left as an exercise to the reader.

really i'm just messing around here as i get back into coding for fun, but this could be the start of semantic search, or a custom feed where you give it a list of exemplar posts and it shows you new ones that come in close to one of them.
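the exemplar-feed idea in miniature: a new post makes the feed if its embedding is cosine-close to *any* exemplar. the toy 3-d vectors and 0.8 threshold here are made up; real embeddings would be hundreds of dims.

```python
# Sketch of the exemplar feed: keep a post if its embedding is within a
# cosine-similarity threshold of at least one exemplar embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches_feed(post_vec, exemplar_vecs, threshold=0.8):
    # close to ANY exemplar is enough
    return max(cosine(post_vec, e) for e in exemplar_vecs) >= threshold

exemplars = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(matches_feed([0.9, 0.1, 0.0], exemplars))  # True: near the first exemplar
print(matches_feed([0.0, 0.0, 1.0], exemplars))  # False: far from both
```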

GitHub - pgvector/pgvector: Open-source vector similarity search for Postgres

i'm about to describe some pie in the sky but: what if a relay could do expensive processing like calculating standardized post text and image embeddings (or even just fetching link preview cards), and then consumers that decide to trust that relay could skip recomputing/refetching all that stuff, so they'd only need to calc query embeddings locally (and local posts obvi). some guy could put an old gaming PC in his garage and then hundreds of Fedi servers could do less work.

how's that Mastodon thing for "Fediverse providers" going anyway

also why aren't we using torrents for post media. did people forget torrents exist again

Fediverse Discovery Providers

A project exploring better search and discovery on the Fediverse as an optional, decentralized and pluggable service.

gotta go faster… i started the migration to create embeddings for my 2.7M existing posts last night, 14 hours ago, and it's only done about 1.0M of them since
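back-of-envelope on that rate:

```python
# 1.0M posts embedded in 14 hours, 2.7M total: what's the rate and the ETA?
done, total, hours = 1_000_000, 2_700_000, 14
rate = done / (hours * 3600)               # posts per second
eta_hours = (total - done) / done * hours  # hours left at the same rate
print(f"{rate:.1f} posts/s, ~{eta_hours:.1f} h remaining")  # ~19.8 posts/s, ~23.8 h
```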
my thing yesterday was learning the ort API (it's the ONNX Runtime wrapper for Rust). and since i don't know ONNX yet either, it's gonna be my thing tomorrow too
wrapping up the first prototype GTS version of this tonight. there's something here, but a lot of the specifics are fussy, and i think going fully out of process, including storage, indexing, tokenization, etc. will be the way to go.

@vyr if you're doing it in rust, yeah, but I think you can do it in a fairly small amount of C code in-process. what I'm thinking is byte pair encoding (I already have a pure C library I wrote for that) -> token vectors (I generated those yesterday) -> CNN autoencoder embedding (I have that training right now) -> gaussian random projection -> morton codes -> LMDB. the neural network code is copy/paste from darknet (which is also plain C with no deps)
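to make the random-projection step above concrete, a rough python sketch of it (the real thing is plain C): draw matrix entries from N(0, 1/k) so squared norms are preserved in expectation (Johnson-Lindenstrauss style). the dimensions here are arbitrary.

```python
# Gaussian random projection: reduce a d-dim embedding to k dims while
# roughly preserving distances. Pure-Python sketch; dims chosen arbitrarily.
import random

def projection_matrix(d, k, seed=0):
    rng = random.Random(seed)
    # entries ~ N(0, 1/k) so E[||Pv||^2] == ||v||^2
    return [[rng.gauss(0.0, 1.0 / k ** 0.5) for _ in range(d)] for _ in range(k)]

def project(vec, matrix):
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

P = projection_matrix(d=64, k=8)
v = [random.Random(1).uniform(-1, 1) for _ in range(64)]
print(len(project(v, P)))  # 8
```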

the big advantage of doing random projection and morton codes in a b-tree index like that (instead of something like HNSW) is that adding posts to the index is just a b-tree insert. writes are fast and there's no need to re-build indexes.

you need cgo to build but there are no dependencies and no external build process so as a go library it should "just work"

@bob @vyr do you have any non-/academic literature for that, out of curiosity
@kouhai @vyr the scikit-learn docs do a good job of describing random projection, but apart from that just Wikipedia