
Build a RAG Application on the AI Stack of the Future | LinkedIn
Join us for another live coding session and learn how to implement retrieval-augmented generation (RAG) globally, powered by the RAG stack of the future: Qdrant + Seaplane + Vultr.
In this event, we are building two globally deployed pipelines that form the basis for any RAG application.
The first is a processing pipeline that ingests a PDF or other text-based format and turns that knowledge base into vector embeddings stored in a Qdrant vector store.
The second pipeline combines those vector embeddings with a low-latency LLM (Zephyr-7B) deployed on an edge GPU on Vultr to answer users' questions based on the indexed knowledge base.
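The two pipelines above can be sketched in miniature. This is a hypothetical, self-contained illustration, not the event's actual code: the `embed` function is a toy hashed bag-of-words stand-in for a real embedding model, a plain Python list stands in for a Qdrant collection, and the final step only builds the prompt that would be sent to an LLM such as Zephyr-7B.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy hashed bag-of-words embedding, normalized to unit length.
    # A real pipeline would call a sentence-embedding model instead.
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk(text: str, size: int = 200) -> list[str]:
    # Split the source document into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Pipeline 1 (indexing): embed each chunk of the knowledge base and store
# (vector, chunk) pairs. A list stands in for a Qdrant collection here.
knowledge_base = "Seaplane deploys AI applications globally with a few lines of Python."
index = [(embed(c), c) for c in chunk(knowledge_base)]

# Pipeline 2 (retrieval): embed the question, rank chunks by cosine
# similarity, and pack the best matches into a prompt for the LLM.
def build_prompt(question: str, top_k: int = 3) -> str:
    q = embed(question)
    scored = sorted(index, key=lambda p: -sum(a * b for a, b in zip(p[0], q)))
    context = "\n".join(c for _, c in scored[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How does Seaplane deploy applications?")
```

In the real stack, the index step would upsert points into Qdrant and the prompt would be sent to the GPU-hosted model; the retrieval-then-generate flow is the same.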
Join us to learn all the ins and outs of building RAG-powered applications, and make sure to stay until the end to claim 30 days of free usage of Seaplane, including LLMs and vector storage!
About the hosts
Qdrant is the market-leading vector database, transforming how your knowledge can be used in modern AI-infused applications.
Vultr offers a straightforward and affordable cloud infrastructure solution, making it easy for developers to deploy server instances on the edge worldwide.
Seaplane is pioneering a new platform for radically simplifying the deployment of AI-infused applications globally. With a few lines of Python, you can deploy your app on region earth.