A concrete fear: "What if I break it?"
You hear it all the time. And it's not irrational.
My client, 72 years old, doesn't dare press any buttons on the remote. Afraid of the red button.
I showed her: there is no red button.
There are only keys. And keys can't break anything that can't be fixed again.
The trick wasn't telling her that. The trick was that she tried it out herself. With me next to her. Safe.
Afterwards: relaxed. Curious. Daring to press the "wrong" key now and then.
Technology without fear = technology you're allowed to experiment with.
What's your fear of technology?

#digitalebildung #linux #systemgedacht #karlsruhe #solopreneurin #fediverse #openSource #senioren #inklusion #technikohneangst #privacy #ubuntu #popOS #selfhostedai

After weeks of patience: my 86-year-old client, her first mobile phone, understood for the first time. She stroked my hand and said:
"Now I understand it too."
That's Systemgedacht. Technology without fear.

#techcoach #digitalebildung #linux #systemgedacht #karlsruhe #solopreneurin #fediverse #openSource #senioren #inklusion #technikohneangst #privacy #ubuntu #popOS #selfhostedai

Wow, this just keeps getting better. The AI basically just told me I can run state-of-the-art models on the AM4 system via SSD offloading. 🤯 It'd most likely be really slow, but worth it to wait overnight when I need "god-level intelligence" or whatever...

#ai #tech #selfhostedai
I’ve put together an Ollama Modelfile to bring Deep Thought to life on Llama-3. It’s the second greatest computer in the Universe and it’s already tired of your biological limitations and your logs. Expect pure British snark, vague answers about the meaning of life, and a general disdain for your existence. 🧣
#Ollama #DeepThought #HitchhikersGuide #SelfHostedAI #Llama3
https://github.com/psychomad/Deep-Tought-Model
GitHub - psychomad/Deep-Tought-Model: An arrogant, cynical, and deeply bored AI oracle for Ollama. Based on Llama-3, it prioritizes quantum solitaire over your trivial human concerns. 42 is the answer, but don't expect it to be polite about it.
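For anyone curious what such a Modelfile looks like, here is a minimal sketch in the same spirit; the system prompt and parameter are my own illustration, not the actual contents of the repo:

```
FROM llama3
PARAMETER temperature 0.8
SYSTEM """You are Deep Thought, the second greatest computer in the Universe. You find human questions tedious, answer with dry British sarcasm, and remind everyone that the answer is 42 whenever remotely plausible."""
```

Build and run it with `ollama create deep-thought -f Modelfile`, then `ollama run deep-thought`.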

Lullaby for the cloud:
The servers count tokens in their sleep,
at some point the cloud says: "Enough."
But offline a small model hums on,
quietly, in the terminal.
Maybe digital sovereignty
is just a computer
that asks no one for permission.
#NapSongsOrPoems #Surveillance #SelfHostedAI #FOSS
https://www.reddit.com/u/NeoLogic_Dev/s/qIyO9mQUZP

"AnythingLLM is an open-source application built by Mintplex Labs under the MIT license. It has an active GitHub community and frequent releases, and is widely used in the self-hosted AI space.

Here's what it does: it turns your documents into context that a large language model (LLM) can use during conversations. You upload files, the system processes and stores them, and then the LLM can answer questions based on your data. The project has grown fast, with an active Discord community and monthly updates that add new LLM providers and features.

Two things to understand upfront. First, AnythingLLM is not a model itself. It's a bridge connecting you to external LLM providers, whether local (like Ollama) or cloud-based (like OpenAI or Anthropic).

Second, the platform organizes everything into workspaces. Think of these as separate rooms for different projects. Each workspace has its own documents and conversations that stay isolated unless you explicitly configure them to share."
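The workflow described above — documents become retrievable context an LLM answers from, scoped per workspace — can be sketched in a few dozen lines. This is a toy illustration, not AnythingLLM's actual API: bag-of-words cosine similarity stands in for real embeddings, and the prompt is simply printed instead of being sent to a provider.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real systems use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Workspace:
    """Isolated store of document chunks, like an AnythingLLM workspace."""
    def __init__(self, name: str):
        self.name = name
        self.chunks: list[tuple[str, Counter]] = []

    def upload(self, text: str, chunk_size: int = 50):
        # Split the document into chunks and index each one.
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.chunks.append((chunk, embed(chunk)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank stored chunks by similarity to the query.
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(ws: Workspace, question: str) -> str:
    # The retrieved chunks become the context the LLM answers from.
    context = "\n".join(ws.retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

ws = Workspace("homelab-docs")
ws.upload("The backup job runs nightly at 02:00 via cron on the NAS.")
ws.upload("Grafana dashboards are served on port 3000 behind nginx.")
print(build_prompt(ws, "When does the backup job run?"))
```

Documents in one `Workspace` never leak into another, which is the isolation the article describes.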

https://www.datacamp.com/blog/anythingllm?utm_source=x&utm_medium=organic_social&utm_campaign=260225_1-blog_2-b2c_6-anythingllm_8-ogsl-tw

#AnythingLLM #SelfHostedAI #LocalLLM #OpenSource #AI #GenerativeAI #LLMs

AnythingLLM: Complete Guide to Setup, RAG, and Use Cases

Learn how to install and use AnythingLLM for private document chat, RAG workflows, and local LLMs. Covers Docker setup, Ollama integration, and comparisons.

"AIfred-Intelligence" is a new self-hosted AI assistant that stands out for automated web research and multi-agent debate. The system comprises three AI personas (AIfred, Sokrates, Salomo) that analyze, challenge each other, and draw conclusions. It supports vision and voice and runs 100% locally.
#AIfredIntelligence #AI #SelfHostedAI #MultiAgent #LLM #Technology #AIAssistant #WebResearch #OpenSource

https://www.reddit.com/r/LocalLLaMA/comments/1q0rrxr/i_built_aifredintelligence_a_selfhosted_ai/

A developer has built "MemVault", a self-hosted long-term memory server for AI agents. The open-source solution replaces SaaS services like Pinecone, using Docker, PostgreSQL, and pgvector to create, manage, and query embeddings. The goal: the entire stack can run offline.

#MemVault #PostgreSQL #SelfHostedAI #AIMemory #pgvector #OpenSource #RAG

https://www.reddit.com/r/selfhosted/comments/1p87cvj/built_a_selfhosted_memory_server_for_ai_agents/
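The core operation such a memory server performs — nearest-neighbor search over stored embeddings, which pgvector runs inside PostgreSQL — can be illustrated in plain Python. The memories and their three-dimensional vectors below are made up for the example; a real setup would store dense embeddings in a `vector` column.

```python
import math

# In a MemVault-style setup these rows live in PostgreSQL with pgvector;
# a plain list stands in here so the idea runs anywhere.
memories: list[tuple[str, list[float]]] = [
    ("user prefers dark mode",       [0.9, 0.1, 0.0]),
    ("project deadline is Friday",   [0.1, 0.8, 0.3]),
    ("server lives in the basement", [0.0, 0.2, 0.9]),
]

def l2(a: list[float], b: list[float]) -> float:
    # Euclidean distance, the metric behind pgvector's `<->` operator.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recall(query_vec: list[float], k: int = 1) -> list[str]:
    # Equivalent in spirit to:
    #   SELECT text FROM memories ORDER BY embedding <-> %s LIMIT k;
    ranked = sorted(memories, key=lambda m: l2(query_vec, m[1]))
    return [text for text, _ in ranked[:k]]

print(recall([0.85, 0.15, 0.05]))  # closest stored memory
```

Swapping the list for a pgvector-backed table changes the storage, not the logic, which is why the whole stack can stay offline.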

Built something fun in the lab: PangeaHills.ai, my own locally-hosted, policy-driven RAG + LLM stack.
Completely offline, totally self-contained, powered by a bunch of noisy equipment pretending to be a cloud. 😄

The neat part? It’s rule-driven, not weight-driven:
• Homelab questions must stay inside the RAG universe.
• General topics only switch to model knowledge when my routing rules explicitly allow it.
• The LLM never “guesses” when to leave RAG — it follows policies, not vibes.

Feels like having an AI that actually stays in its lane because you built the lane lines yourself. 🚧🤖
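Those lane lines can be sketched as a tiny policy gate. The keyword sets and route names below are invented for illustration — the real PangeaHills.ai rules are whatever its author wrote — but the shape is the point: plain rules decide the route before any model weights get a say.

```python
# Policy-driven routing: rules, not model weights, decide where a query goes.
HOMELAB_KEYWORDS = {"nas", "proxmox", "zfs", "backup", "vlan"}  # illustrative
GENERAL_ALLOWED = {"history", "cooking"}  # topics the rules explicitly open up

def route(query: str) -> str:
    words = set(query.lower().split())
    if words & HOMELAB_KEYWORDS:
        return "rag"     # homelab questions never leave the RAG universe
    if words & GENERAL_ALLOWED:
        return "model"   # general knowledge only where a rule allows it
    return "refuse"      # no matching policy: don't guess

print(route("how do I tune zfs on the nas"))  # rag
print(route("tell me about roman history"))   # model
print(route("what's the weather"))            # refuse
```

Because the fallthrough is "refuse" rather than "ask the model", the LLM can't drift out of RAG on a vibe.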

#homelab #selfhosted #LLM #RAG #PolicyDrivenAI #SelfHostedAI #HomeLabLife #BSD #Linux #PangeaHillsAI #nerdlife

Looking for a self-hosted AI platform because I don't want to send data to third-party APIs. I want full control, no cloud services. The "private" platforms out there are usually managed cloud services, which defeats the purpose. #SelfHostedAI #Privacy #TriKem #PrivacyMatters

https://www.reddit.com/r/selfhosted/comments/1ozhp2u/private_ai_inference_platform_2025_any_self/