Add llm-wiki to Workflows and Innovations#75
Merged
logseq-cldwalker merged 1 commit into logseq:master on Apr 27, 2026
Conversation
llm-wiki implements Karpathy's LLM Wiki pattern, using Claude Code as the LLM brain and Logseq as the wiki UI. MIT-licensed standalone repo at github.com/MehmetGoekce/llm-wiki.
Summary
Adds llm-wiki to the Workflows and Innovations section. It is a Logseq-based implementation of Andrej Karpathy's LLM Wiki gist — Claude Code maintains the graph for you (ingest sources, query, lint, status), with schema-driven consistency and a two-layer cache architecture.
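The two-layer cache mentioned above could look something like the following minimal sketch: a process-local in-memory layer in front of a persistent on-disk layer, with disk hits promoted to memory. The `TwoLayerCache` class, its file layout, and the key scheme are illustrative assumptions, not the project's actual implementation.

```python
import hashlib
import json
from pathlib import Path


class TwoLayerCache:
    """Hot in-memory dict backed by a cold on-disk JSON store (illustrative)."""

    def __init__(self, cache_dir: str):
        self.mem: dict = {}                 # layer 1: fast, process-local
        self.dir = Path(cache_dir)          # layer 2: survives restarts
        self.dir.mkdir(parents=True, exist_ok=True)

    def _path(self, key: str) -> Path:
        # Hash the key so arbitrary page names map to safe filenames.
        return self.dir / (hashlib.sha256(key.encode()).hexdigest() + ".json")

    def get(self, key: str):
        if key in self.mem:                 # memory hit
            return self.mem[key]
        p = self._path(key)
        if p.exists():                      # disk hit: promote to memory
            self.mem[key] = json.loads(p.read_text())
            return self.mem[key]
        return None                         # miss in both layers

    def put(self, key: str, value) -> None:
        self.mem[key] = value               # write through both layers
        self._path(key).write_text(json.dumps(value))
```

The write-through design keeps the two layers consistent, so a fresh process (empty memory layer) can still serve earlier results from disk.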
Why it fits
The "Workflows and Innovations" section already lists tools that automate parts of Logseq maintenance (Lupin, logseq-doctor, Logseq Advanced Query Builder, etc.). llm-wiki extends this idea further: it lets an LLM maintain the entire knowledge graph — extracting facts from new sources, updating cross-references, enforcing the schema, and running health checks (orphans, stale pages, broken refs, credential leaks).
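Two of the health checks listed above, orphan pages and broken references, can be sketched with a single pass over the graph's `[[wikilinks]]`. The `lint` function and its page representation here are hypothetical, not llm-wiki's actual code:

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")  # matches [[Page Name]]


def lint(pages: dict[str, str]) -> tuple[list, list]:
    """pages maps page name -> markdown body.
    Returns (orphans, broken_refs): pages nothing links to,
    and (source, target) pairs whose target page does not exist."""
    linked: set[str] = set()
    broken: list[tuple[str, str]] = []
    for name, body in pages.items():
        for target in WIKILINK.findall(body):
            linked.add(target)
            if target not in pages:
                broken.append((name, target))
    orphans = [name for name in pages if name not in linked]
    return orphans, broken
```

Stale-page and credential-leak checks would follow the same shape: one scan per rule, reporting offending pages rather than mutating them.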
The Logseq outliner format is what makes the LLM use case work — every block is independently addressable, so an ingest can append new content without disrupting existing structure. This is genuinely Logseq-native; the same pattern is much harder in flat-markdown systems.
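The append-without-disruption property can be seen in a toy sketch: because each Logseq block is its own bullet, an ingest only ever adds lines, leaving existing blocks (and any `id::` block references pointing at them) byte-for-byte intact. The `append_block` helper is an illustrative assumption, not llm-wiki's API:

```python
def append_block(page_md: str, content: str, indent: int = 0) -> str:
    """Append one bullet block to a Logseq-style outline page.
    Existing blocks and their block IDs are left untouched."""
    bullet = "\t" * indent + "- " + content  # Logseq indents with tabs
    if page_md and not page_md.endswith("\n"):
        page_md += "\n"
    return page_md + bullet + "\n"
```

In a flat-markdown system, inserting the same fact safely would mean parsing prose paragraphs and choosing a splice point, which is exactly the fragility the outliner format avoids.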
Project details
Test plan