Did you know? Our pgedge-vectorizer tool (on GitHub: https://github.com/pgEdge/pgedge-vectorizer) automatically chunks text content and generates vector embeddings with the help of background workers.

OpenAI, Voyage AI, and Ollama are supported as embedding providers, and a simple SQL interface lets you enable vectorization on any table. (There are even built-in views and functions for monitoring queue status.)
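
Under the hood, "chunking" just means splitting long text into overlapping windows before each window is embedded. A toy character-based sketch of the idea (the extension's real chunker is more sophisticated; the function name and parameters here are illustrative, not its actual API):

```python
def chunk_text(text: str, max_chars: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping character windows.

    A simple stand-in for the chunking a vectorizer performs before
    embedding: each chunk shares `overlap` characters with the next,
    so sentences cut at a boundary still appear whole in one chunk.
    """
    chunks = []
    step = max_chars - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + max_chars]
        if piece:
            chunks.append(piece)
        if start + max_chars >= len(text):
            break
    return chunks
```

The overlap is the important design choice: without it, a fact straddling two chunks would be split and neither embedding would capture it fully.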

#github #opensource #semanticsearch #vector #vectordatabase #openai #ollama #voyageai

GitHub - pgEdge/pgedge-vectorizer: A PostgreSQL extension to create chunk tables for existing text data, and populate them with embeddings using your favourite LLM.

If you've ever wanted to build a #RAG server with #PostgreSQL, now's your chance. Dave Page wrote a three-part series using OpenAI, Voyage AI, or a local Ollama installation.

💡 Have you given it a try? Did you get a chance to experiment with other LLMs not covered by the article? What were your results? Let us know!

Find it all here:

1️⃣ : https://www.pgedge.com/blog/building-a-rag-server-with-postgresql-part-1-loading-your-content

2️⃣ : https://www.pgedge.com/blog/building-a-rag-server-with-postgresql-part-2-chunking-and-embeddings

3️⃣ : https://www.pgedge.com/blog/building-a-rag-server-with-postgresql-part-3-deploying-your-rag-api

#ollama #llama #postgres #api #openai #voyageai #devops #dev #devcommunity

Building a RAG Server with PostgreSQL - Part 1: Loading Your Content

Retrieval-Augmented Generation (RAG) has become one of the most practical ways to give Large Language Models (LLMs) access to your own data. Rather than fine-tuning a model or hoping it somehow knows about your documentation, RAG lets you retrieve relevant content from your own sources and provide it as context to the LLM at query time. The result is accurate, grounded responses based on your actual content.
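
The retrieval step described above can be sketched in a few lines of plain Python. Toy vectors stand in for real embeddings from OpenAI, Voyage AI, or Ollama, and in a PostgreSQL setup the ranking would happen in-database via pgvector's distance operators; this is just the shape of the idea:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], chunks: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the k chunk texts most similar to the query vector."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Assemble the retrieved chunks into context for the LLM."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The prompt produced by `build_prompt` is what gets sent to the LLM at query time, which is exactly how the retrieved content "grounds" the response.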

#VoyageAI introduces voyage-context-3, a contextualized chunk #embedding model that captures both chunk-level detail and full-document context 🔍 #ai #llm

🔄 Outperforms #OpenAI-v3-large by 14.24% on chunk-level and 12.56% on document-level retrieval tasks

📊 Beats #Cohere-v4 by 7.89% and 5.64% respectively, and #Jina-v3 late chunking by 23.66% and 6.76%

🛠️ Drop-in replacement for standard embeddings without requiring downstream workflow changes

🧵👇