Merged
58 changes: 22 additions & 36 deletions e2e/README.md
@@ -12,14 +12,14 @@ npm run test:e2e

Runs: **build → pack → publint + attw → smoke test → cleanup**

-| Step | What it does |
-|------|-------------|
-| `npm run build` | Compile TypeScript |
-| `npm pack` | Create tarball from `files` field |
-| `publint --strict` | Validate package.json exports, files, types |
-| `attw` | Check TypeScript type resolution across all `moduleResolution` settings |
-| `smoke.mjs` | 68 assertions exercising the public API |
-| cleanup | Remove `.tgz`, `e2e/node_modules`, `e2e/package-lock.json` |
+| Step | What it does |
+| ------------------ | --------------------------------------------------------------------------------------- |
+| `npm run build` | Compile TypeScript |
+| `npm pack` | Create tarball from `files` field |
+| `publint --strict` | Validate package.json exports, files, types |
+| `attw` | Check TypeScript type resolution across all `moduleResolution` settings |
+| `smoke.mjs` | 41 tests / 74 assertions exercising the public API (`node:test` + `node:assert/strict`) |
+| cleanup | Remove `.tgz`, `e2e/node_modules`, `e2e/package-lock.json` |

Cleanup always runs, even on failure. The exit code from the smoke test is preserved.
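The "cleanup always runs, exit code preserved" behavior can be sketched with a `try`/`finally` in a runner script. This is an illustrative sketch only, not the repo's actual `test:e2e` implementation; the step and cleanup names are assumptions drawn from the table above.

```javascript
// Illustrative sketch -- NOT the repo's actual test:e2e runner.
// Shows how cleanup can always run while the first failing step's
// exit code is preserved.
function runAll(steps, cleanup) {
  let exitCode = 0;
  try {
    for (const step of steps) step(); // a failing step throws with a `status`
  } catch (err) {
    exitCode = err.status ?? 1;       // preserve the failing step's exit code
  } finally {
    cleanup();                        // always runs, even on failure
  }
  return exitCode;
}

// A real runner might shell out for each step, e.g.:
//   import { execSync } from "node:child_process";
//   const sh = (cmd) => () => execSync(cmd, { stdio: "inherit" });
//   process.exit(runAll(
//     [sh("npm run build"), sh("npm pack"), sh("npx publint --strict"),
//      sh("npx attw --pack ."), sh("npm test --prefix e2e")],
//     () => { /* remove .tgz, e2e/node_modules, e2e/package-lock.json */ },
//   ));
```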

@@ -32,31 +32,17 @@ npm run test:e2e:published

## What the smoke test covers

-| # | Area | What's tested |
-|---|------|---------------|
-| 1 | Basic compress | ratio, token_ratio, message count, verbatim store |
-| 2 | Uncompress round-trip | lossless content restoration |
-| 3 | Dedup | exact duplicate detection (>=200 char messages) |
-| 4 | Token budget (fit) | binary search finds a recencyWindow that fits |
-| 5 | Token budget (tight) | correctly reports `fits: false` when impossible |
-| 6 | defaultTokenCounter | returns positive number |
-| 7 | Preserve keywords | keywords retained in compressed output |
-| 8 | sourceVersion | flows into compression metadata |
-| 9 | embedSummaryId | summary_id embedded in compressed content |
-| 10 | Factory functions | createSummarizer, createEscalatingSummarizer exported |
-| 11 | forceConverge | best-effort truncation, no regression |
-| 12 | Fuzzy dedup | runs without errors, message count preserved |
-| 13 | Provenance metadata | _cce_original structure (ids, summary_id, version) |
-| 14 | Missing verbatim store | missing_ids reported correctly |
-| 15 | Custom tokenCounter | invoked and used for ratio calculation |
-| 16 | Edge cases | empty input, single message |
-| 17 | Async path (mock summarizer) | compress returns Promise, summarizer called, round-trip works |
-| 18 | Async + token budget | async binary search produces fits/tokenCount/recencyWindow |
-| 19 | System role | system messages auto-preserved, never compressed |
-| 20 | tool_calls | messages with tool_calls pass through intact |
-| 21 | Re-compression | compress already-compressed output, recover via chained stores |
-| 22 | Recursive uncompress | nested provenance fully expanded |
-| 23 | minRecencyWindow | floor enforced during budget binary search |
-| 24 | Large conversation (31 msgs) | compression + lossless round-trip at scale |
-| 25 | Large conversation + budget | binary search converges on 50% budget target |
-| 26 | Verbatim store as object | uncompress accepts plain Record, not just function |
+| Area | What's tested |
+| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| **Basic compression** | ratio, token_ratio, message count, verbatim store, preserve keywords, sourceVersion, embedSummaryId, forceConverge, provenance metadata |
+| **Uncompress round-trip** | lossless content restoration, missing verbatim store, plain object store |
+| **Dedup** | exact duplicate detection (>=200 char), fuzzy dedup detects near-duplicates |
+| **Token budget** | binary search fit, impossible budget (fits=false), minRecencyWindow floor |
+| **Token counter** | defaultTokenCounter, custom tokenCounter |
+| **Factory functions** | createSummarizer, createEscalatingSummarizer exported |
+| **Edge cases** | empty input, single message |
+| **Async path** | mock summarizer + round-trip, async + token budget |
+| **Role handling** | system messages auto-preserved, tool_calls pass through + other messages compressed |
+| **Re-compression** | compress already-compressed output + chained stores, recursive uncompress |
+| **Large conversation** | 31-message fixture, compression + round-trip, 50% budget target |
+| **Error handling** | TypeError on non-array compress, null entry, missing id, non-array uncompress, invalid store; graceful handling of null/empty content |
2 changes: 1 addition & 1 deletion e2e/package.json
@@ -4,6 +4,6 @@
"type": "module",
"description": "End-to-end smoke test — installs context-compression-engine from npm and exercises the public API as a real consumer would.",
"scripts": {
- "test": "node smoke.mjs"
+ "test": "node --test smoke.mjs"
}
}