I Deployed A Fully Local RAG Frontend (n8n + Ollama)
Author: The AI Automators
Uploaded: 2025-07-07
Views: 25,856
👉 Learn how to customize this system and build others like it in our community: https://www.theaiautomators.com/?utm_...
📦 Git Repos
InsightsLM Local Package - https://github.com/theaiautomators/in...
InsightsLM Repo - https://github.com/theaiautomators/in...
Cole Medin's Local AI Package - https://github.com/coleam00/local-ai-...
More and more teams want to run AI agents entirely offline — for privacy, compliance, and control. In this video, I walk you through a fully local version of InsightsLM — a self-hosted NotebookLM-style app powered by Ollama, Supabase, and n8n — that you can run on your own hardware without relying on any cloud services.
This isn’t just a frontend wrapper. It’s a full RAG (Retrieval-Augmented Generation) system with:
📁 PDF/document ingestion + vector embeddings
🧠 Local LLM inference using Qwen3 8B via Ollama
🗣️ Local Whisper for audio transcription
🎙️ Coqui TTS for text-to-speech podcast generation
⚙️ All orchestration done with n8n — fully visual, no code
🔐 Zero cloud dependency — everything runs in Docker locally
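To give a sense of how these pieces fit together, here's a minimal Python sketch of the query path: embed the question with Ollama, retrieve matching chunks from Supabase/pgvector, then answer with the local model. The ports, the embedding model, and the `match_documents` RPC name are illustrative assumptions, not taken from the InsightsLM repo.

```python
import requests

OLLAMA = "http://localhost:11434"       # default Ollama port
SUPABASE = "http://localhost:8000"      # assumed local Supabase gateway port
SUPABASE_KEY = "your-service-role-key"  # placeholder

def ask(question: str) -> str:
    # 1. Embed the question locally via Ollama's embeddings endpoint.
    emb = requests.post(
        f"{OLLAMA}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": question},  # assumed embedding model
    ).json()["embedding"]

    # 2. Retrieve similar chunks from Supabase/pgvector ("match_documents" is an assumed RPC name).
    chunks = requests.post(
        f"{SUPABASE}/rest/v1/rpc/match_documents",
        headers={"apikey": SUPABASE_KEY, "Authorization": f"Bearer {SUPABASE_KEY}"},
        json={"query_embedding": emb, "match_count": 5},
    ).json()
    context = "\n\n".join(c["content"] for c in chunks)

    # 3. Generate a grounded answer with the local Qwen3 8B model.
    reply = requests.post(f"{OLLAMA}/api/chat", json={
        "model": "qwen3:8b",
        "stream": False,
        "messages": [
            {"role": "system", "content": f"Answer only from this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }).json()
    return reply["message"]["content"]

print(ask("What does the uploaded report say about Q3 revenue?"))
```

In the actual build, this orchestration lives in n8n workflows rather than Python; the sketch is just the same three HTTP calls laid out linearly.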
The system includes re-architected workflows optimized for smaller local models, with fallback handling for hardware limitations (such as 8GB of VRAM). We also show how to adapt your prompts, workflows, and citations so that even smaller models stay useful for real-world Q&A tasks.
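If you're VRAM-constrained, one knob worth knowing is Ollama's per-request options: capping the context window shrinks the KV cache, and unloading the model after each call frees VRAM for the next service. A hedged sketch (the values are illustrative, not the video's settings):

```python
import requests

resp = requests.post("http://localhost:11434/api/chat", json={
    "model": "qwen3:8b",
    "stream": False,
    "keep_alive": 0,               # unload the model once the response is done
    "options": {"num_ctx": 4096},  # smaller context window -> smaller KV cache
    "messages": [{"role": "user", "content": "Summarize the ingested PDF."}],
})
print(resp.json()["message"]["content"])
```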
🙏 Huge shoutout to Cole Medin — this build is based on his phenomenal Local AI Package starter repo.
💻 This project is perfect for developers, researchers, and privacy-focused teams who want full control over their data and AI stack.
🛠️ Tech Stack:
Docker (All services self-contained)
Supabase (Postgres, Auth, Edge Functions)
n8n (Orchestration & automation)
Ollama (Qwen3 8B LLM)
Whisper ASR (Docker)
Coqui TTS (Single Speaker TTS)
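Once the containers are up, a quick way to verify the setup is to ping each service over HTTP. The ports below are each project's common defaults, not necessarily what your docker-compose file maps them to, so adjust as needed:

```python
import requests

# Assumed default ports: adjust to match your docker-compose mappings.
services = {
    "n8n":          "http://localhost:5678/healthz",
    "Ollama":       "http://localhost:11434/api/tags",
    "Supabase API": "http://localhost:8000/rest/v1/",  # Kong gateway default
    "Whisper ASR":  "http://localhost:9000/docs",      # whisper-asr-webservice default
    "Coqui TTS":    "http://localhost:5002/",          # tts-server default
}

for name, url in services.items():
    try:
        r = requests.get(url, timeout=5)
        print(f"{name:12s} -> HTTP {r.status_code}")
    except requests.RequestException as exc:
        print(f"{name:12s} -> unreachable ({exc.__class__.__name__})")
```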
📍Timestamps:
00:00 Local InsightsLM Demo
10:46 Step-by-Step Setup Guide
24:29 Verifying the Setup