Discussion: Future of Markdown Viewer Share Architecture (Client-Side vs Cloud Share) #98
ThisIs-Developer asked this question in Q&A.
I have been doing a deep technical evaluation of the current share system used in my Markdown Viewer project and wanted community feedback before redesigning the architecture.
Current Share Architecture
Right now the application is fully client-side.
The share system works like this:
The entire markdown document is compressed and embedded directly inside the URL fragment.
Example:
Important details about the current implementation:
No backend
No database
No API server
No remote storage
No authentication
No server-side processing
Everything happens locally in the browser
This gives some major advantages:
Strong privacy
Fully offline-capable sharing
Zero infrastructure cost
No server maintenance
Easy static deployment
Truly client-side architecture
The Problem
As document size increases, URLs become extremely large.
Even with DEFLATE compression + Base64url encoding, there are hard limits caused by:
entropy/compression ceilings
Base64 overhead (~33%)
browser URL limits
clipboard limits
QR code limits
social sharing limitations
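The ~33% Base64 overhead falls out of the encoding itself: every 3 input bytes become 4 output characters. A quick sketch of the arithmetic:

```javascript
// Base64 maps each 3-byte group to 4 characters, so the unpadded
// base64url length of n bytes is ceil(4n / 3).
function base64urlLength(nBytes) {
  return Math.ceil((nBytes * 4) / 3);
}
```

For example, 30,000 compressed bytes become 40,000 URL characters before any other overhead, which is why compression alone cannot beat the browser/clipboard/QR limits above.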
From testing:
| Markdown Size | Approx. URL Size |
| --- | --- |
| 5 KB | ~1.6K chars |
| 20 KB | ~3.6K chars |
| 40 KB | ~6.3K chars |
| 60 KB | ~8.8K chars |
| 95 KB | ~13K+ chars |

Large technical markdown documents (code blocks, Mermaid diagrams, tables, unique identifiers, etc.) hit these limits quickly.
The analysis suggests this is not really solvable through “better compression” because the issue is fundamentally architectural.
Compression helps, but document size is still coupled to URL size.
Proposed Architecture
I am considering a hybrid architecture using:
Cloudflare Pages
Cloudflare Workers
Cloudflare KV
The proposed flow would be:
Small Documents
Keep existing URL-based sharing:
Still:
local
offline
fully client-side
no upload
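Choosing between the two modes could be a simple size check at share time. The 8 KB cutoff below is an assumption for illustration, not a measured limit:

```javascript
// Hypothetical routing between URL-fragment sharing and cloud-backed sharing.
// The threshold is illustrative; the real cutoff would come from the URL-size
// measurements above.
const URL_SHARE_LIMIT = 8 * 1024; // bytes of raw markdown

function chooseShareMode(markdown) {
  const bytes = new TextEncoder().encode(markdown).length;
  return bytes <= URL_SHARE_LIMIT ? "url-fragment" : "cloud-kv";
}
```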
Large Documents
Use cloud-backed storage:
Flow:
Advantages:
tiny URLs
scalable sharing
QR support
social sharing support
practically unlimited document size
The Concern
This introduces an important architectural and privacy question.
If large documents are uploaded to KV storage, then:
document data leaves the browser
content is stored remotely
cloud infrastructure becomes part of the system
there is now some privacy/security surface area
Even if the editor itself remains frontend-first, it no longer feels technically correct to market the app as "fully client-side", because cloud persistence now exists for large shares.
Possible Solution: End-to-End Encryption
One idea is to encrypt the document client-side before upload and keep the decryption key in the URL fragment, which browsers never send to the server.
Then:
KV stores only encrypted data
decryption key stays in URL fragment
server never sees plaintext
Worker never has access to document contents
Example:
This would preserve much stronger privacy guarantees while still solving the URL scaling problem.
Questions For The Community
I would really appreciate feedback from people experienced with:
frontend architecture
JAMstack applications (JavaScript, APIs, and Markup-based static web architectures)
cloud storage
privacy/security
edge computing
markdown/document platforms
Main questions:
Is moving from pure URL-based sharing to Workers + KV the correct architectural direction here?
Once remote KV storage exists, is it still fair to describe the project as “client-side” or “frontend-first”?
Would end-to-end encryption be enough to preserve the privacy-focused nature of the application?
Are there alternative architectures I should consider before implementing this?
For projects like this, where do you personally draw the line between:
“fully client-side”
“frontend-first”
“cloud-backed app”
I would really appreciate honest technical feedback before moving forward with the redesign.