If Claude had actual intelligence, it would have used #LMDB everywhere - for the MCP server https://github.com/nibzard/lmdb-mcp and for the search indexer too. https://mastodon.social/@hyc/116213877453429083

"I built an MCP server that connects Claude Code to 2M+ research papers - it stops defaulting to what it "knows" and starts using published, benchmarked and latest methods.

How Claude Code built it:

Embedding pipeline: Qwen3-Embedding on AWS g5 instances, USearch HNSW index, #LMDB cache for 2M+ CS papers

FastAPI MCP server with multi-query generation and synthesis

Elasticsearch BM25 indexing across the full corpus"

https://www.reddit.com/r/ClaudeAI/comments/1ruqw21/i_built_an_mcp_server_that_connects_claude_code/

redb is a pure #rust embedded DB inspired by #LMDB, but experienced programmers prefer the original. Convenient juxtaposition of these two search results...
I'm far more impressed by #LMDB than any LLM.

I Ditched Elasticsearch for Meilisearch. Here's What Nobody Tells You.

https://www.anisafifi.com/en/blog/i-ditched-elasticsearch-for-meilisearch-heres-what-nobody-tells-you/

I spent one afternoon replacing it with Meilisearch. My p99 search latency went from 180ms to 12ms. My infrastructure bill dropped to $14/month (from $120). My configuration went from 340 lines of YAML to about 30.

Meilisearch stores everything in memory-mapped files using #LMDB, a high-performance embedded key-value store originally built for the OpenLDAP project. This is why search is fast.

For three years I ran Elasticsearch on a side project that had exactly 200,000 documents and about 800 daily active users. Three years. One cluster. Two nodes.

"We used the large variant of DINOv2 to extract 1024-dimensional per-frame features, using the official implementation5. The extracted features were organized into an #LMDB structure for efficient storage and retrieval."

Datum Cloud Authoritative DNS Service: Research Report

https://gist.github.com/drewr/2055b97dc8e7518cc298e58413688459

"The standard problem with #LMDB in multi-pod deployments is that it cannot be shared across nodes."

The entire reason you use containers is for isolation. Make up your mind, do you want sharing, or don't you?

You can share LMDB between containers using a few non-default settings. Mainly, containers must share a PID namespace instead of always running as their own PID 1.
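A minimal deployment-config sketch of that setup (container names, volume name, and image are hypothetical; `--pid=container:NAME` and Kubernetes `shareProcessNamespace` are the real mechanisms):

```shell
# Sharing one LMDB environment between two containers.
# Two non-default pieces: a shared volume for the database file, and a
# shared PID namespace -- LMDB's reader-lock table records reader PIDs,
# so both containers must see the same PID space.
docker volume create lmdb-data
docker run -d --name writer -v lmdb-data:/data myapp
docker run -d --name reader --pid=container:writer -v lmdb-data:/data myapp
# In Kubernetes, put both containers in one pod and set
# spec.shareProcessNamespace: true for the same effect.
```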

Of course Oracle kind of did it to themselves too, when they changed the BDB license to AGPLv3 in 2013. This prompted Debian to look for alternatives, and #LMDB emerged as the only suitable candidate.
https://lwn.net/Articles/558154/

A bonus from modeling LMDB on the BDB API - we did this to ease development of back-mdb, which was based on back-bdb. But it also meant it was easy for every other project using BDB to migrate. And after these licensing games, they were eager to migrate, so LMDB use exploded.

Re: Berkeley DB 6.0 license change to AGPLv3

You can well imagine how satisfying it's been to see, some years later, that Oracle has deprecated their own use of BerkeleyDB in favor of #LMDB https://mastodon.social/@hyc/115984080628049520

Then Oracle bought Sun and axed the project. The rug was pulled out from under us, and back-ndb was killed.

https://en.wikipedia.org/wiki/Acquisition_of_Sun_Microsystems_by_Oracle_Corporation

So by 2010 we were pretty sick of Oracle messing up our plans. Developing #LMDB wasn't just a technology experiment, it was a strategic step in getting away from Oracle's influence.
