Documentation Index

Fetch the complete documentation index at: https://docs.knowledgestack.ai/llms.txt

Use this file to discover all available pages before exploring further.

  • Database Prewarming — How Knowledge Stack uses PostgreSQL cache prewarming to deliver fast query performance for document retrieval and vector search.
  • Real-Time Notifications — WebSocket and Server-Sent Events (SSE) APIs for receiving real-time updates and streaming AI responses.
  • Vector Search (Qdrant) — How the vector search infrastructure works, including embedding pipelines, metadata filtering, and multi-tenant isolation.
  • LLM Gateway (LiteLLM) — The LLM gateway that provides per-tenant cost tracking, budget enforcement, and model routing.
  • CI/CD Pipeline — Automated build, test, and deployment pipelines for self-hosted Knowledge Stack installations.
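
The entries above follow a simple `Title — Description` bullet format. As a minimal sketch (not an official client), the index could be fetched and parsed like this; the URL is the one given above, and the function names are illustrative:

```python
from urllib.request import urlopen


def fetch_index(url: str = "https://docs.knowledgestack.ai/llms.txt") -> str:
    # Fetch the raw llms.txt index text (URL from the page above).
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")


def parse_index(text: str) -> list[tuple[str, str]]:
    # Split each bullet line on the em dash separator into (title, description).
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("•"):
            continue  # skip headings and prose lines
        title, _, desc = line.lstrip("•").partition("—")
        entries.append((title.strip(), desc.strip()))
    return entries


if __name__ == "__main__":
    sample = (
        "• Database Prewarming — PostgreSQL cache prewarming for fast retrieval.\n"
        "• Vector Search (Qdrant) — Embedding pipelines and metadata filtering.\n"
    )
    for title, desc in parse_index(sample):
        print(f"{title}: {desc}")
```

Parsing is separated from fetching so the format can be tested offline; a real consumer would call `parse_index(fetch_index())`.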