Per-level output files for --detail and --compact #114
Conversation
Preparation for per-level output files.

Migration 003's UNIQUE(project_id, page_number) let a second-variant render clobber the first's cache rows via ON CONFLICT — staleness lookups then crossed variants and orphan-cleanup counts lost track of per-variant pages. Migration 004 recreates html_pages with a variant_suffix column (NOT NULL DEFAULT '') and a UNIQUE(project_id, variant_suffix, page_number) constraint. Existing rows land at variant_suffix='' (the full/default variant) so every upgrade path preserves its cache. The `id` column is preserved across the table recreate so page_sessions' foreign key stays intact.

Extend the page-cache API with variant_suffix kwargs (default ''):

- get_page_data / update_page_cache / is_page_stale / get_page_count now accept variant_suffix; every existing caller keeps working because '' is the default. Callers from the per-level feature will pass the variant.
- invalidate_all_pages and get_all_pages stay project-scoped — page-size changes invalidate across variants (shared config).

No caller changes yet — this commit is the self-consistent schema + API floor for subsequent commits to build on.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
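The schema shape this commit describes can be sketched with in-memory SQLite. Table and column names follow the commit text; the runner machinery and remaining columns are omitted, so this is an illustration of the uniqueness change, not the real migration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Migration 004's target shape: variant_suffix joins the uniqueness key.
conn.execute("""
    CREATE TABLE html_pages (
        id INTEGER PRIMARY KEY,
        project_id INTEGER NOT NULL,
        variant_suffix TEXT NOT NULL DEFAULT '',
        page_number INTEGER NOT NULL,
        html TEXT,
        UNIQUE (project_id, variant_suffix, page_number)
    )
""")

# Pre-004 rows carry over with variant_suffix='' (the full/default variant).
conn.execute(
    "INSERT INTO html_pages (project_id, page_number, html) VALUES (1, 1, 'full p1')"
)

# Under migration 003's UNIQUE(project_id, page_number) this upsert would
# have clobbered the row above; with variant_suffix in the key it coexists.
conn.execute(
    "INSERT INTO html_pages (project_id, variant_suffix, page_number, html) "
    "VALUES (1, '.low', 1, 'low p1') "
    "ON CONFLICT (project_id, variant_suffix, page_number) "
    "DO UPDATE SET html = excluded.html"
)

rows = conn.execute(
    "SELECT variant_suffix, html FROM html_pages "
    "WHERE project_id = 1 AND page_number = 1 ORDER BY variant_suffix"
).fetchall()
print(rows)  # [('', 'full p1'), ('.low', 'low p1')]
```

Both variants' page-1 rows now live side by side, which is what lets the variant-scoped staleness lookups in later commits stay honest.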
`variant_suffix(detail, compact, format)` returns the dot-prefixed
infix that encodes a render variant into an output filename. Default
(full + no compact) returns the empty string so existing
`combined_transcripts.html` / `session-{id}.html` paths are unchanged.
`--compact` only contributes to the suffix for Markdown output; for
HTML it silently has no effect on the filename, matching the CLI's
existing Markdown-only semantics for that flag.
`_VARIANT_ENTRY_RE` matches page-1 entry points for a project
(`combined_transcripts.html`, `combined_transcripts.low.html`,
`combined_transcripts.low.compact.html`) and rejects paginated
trailers (`combined_transcripts_2.html`, `combined_transcripts.low_2.html`)
and empty-segment malformations (`combined_transcripts..html`).
Captures the suffix for variant-label reverse lookup in the index.
No callers yet — this commit is the self-contained primitive for
subsequent commits to use.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
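The two primitives from this commit can be sketched as below. This is an assumption-laden stand-in: the real helper takes DetailLevel values (string-accepting per the convenience mentioned later) and a `format` parameter, and the real regex may constrain its segment alphabet differently:

```python
import re

def variant_suffix(detail: str, compact: bool, format: str) -> str:
    """Dot-prefixed filename infix for a render variant; '' for the default."""
    parts = []
    if detail != "full":
        parts.append(detail)
    # --compact only affects Markdown filenames; HTML ignores it here.
    if compact and format == "md":
        parts.append("compact")
    return "".join(f".{p}" for p in parts)

# Page-1 entry points only: optional dot-separated variant segments,
# no paginated trailer like _2, no empty segments. Group 1 captures the
# suffix for variant-label reverse lookup.
VARIANT_ENTRY_RE = re.compile(r"^combined_transcripts((?:\.[A-Za-z]+)*)\.html$")

assert variant_suffix("full", False, "html") == ""
assert variant_suffix("low", False, "html") == ".low"
assert variant_suffix("low", True, "md") == ".low.compact"
assert variant_suffix("low", True, "html") == ".low"  # compact is md-only
assert VARIANT_ENTRY_RE.match("combined_transcripts.low.compact.html")
assert not VARIANT_ENTRY_RE.match("combined_transcripts_2.html")
assert not VARIANT_ENTRY_RE.match("combined_transcripts..html")
```

The empty-segment rejection falls out of requiring at least one letter after each dot, so `..html` never matches.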
Wire detail+compact into every output-path producer in converter.py:
- `_get_page_html_path(n, variant_suffix="")` appends the suffix before
the page number, keeping `combined_transcripts.html` / `_2.html`
stable when variant is empty and producing
`combined_transcripts.low.html` / `.low_2.html` when it isn't.
- `_enable_next_link_on_previous_page` takes the same suffix so it
edits the right variant's file.
- `_generate_paginated_html` computes the suffix once and threads it
through orphan cleanup (`get_page_count(suffix)`), page-staleness
checks (`is_page_stale(page, size, suffix)`), nav links
(`_get_page_html_path(n, suffix)`) and cache writes
(`update_page_cache(..., variant_suffix=suffix)`).
- `convert_jsonl_to` computes the suffix up-front and uses it for the
default `combined_transcripts{suffix}.{ext}` and
`input_path.with_suffix(f"{suffix}.{ext}")` output paths.
- `_generate_individual_session_files` uses
`session-{id}{suffix}.{ext}`.
- `generate_single_session_file` honours the `--session-id` export
filename similarly; when the CLI passes an explicit `-o` path, the
user's literal path wins (no suffix appended).
Drop the stale force-regen clauses (`or detail != FULL or compact`) in
the non-paginated staleness check: they existed because every non-default
variant used to overwrite `combined_transcripts.html`. With filenames
now encoding the variant, each variant's cache row is authoritative and
the clauses would defeat caching.
`existing_page_count` call in `convert_jsonl_to` is now variant-scoped
via `get_page_count(suffix)`. The index-level `existing_page_count` at
converter.py:2133 stays on the default variant for now — index
enumeration will be extended in a later commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
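The `_get_page_html_path` composition described above reduces to a small naming rule. A sketch under assumptions — the real method lives on the converter and resolves against the project directory, so only the filename part is shown:

```python
def page_html_name(page_number: int, variant_suffix: str = "") -> str:
    """combined_transcripts{suffix}.html for page 1, {suffix}_{n}.html after."""
    trailer = "" if page_number == 1 else f"_{page_number}"
    return f"combined_transcripts{variant_suffix}{trailer}.html"

assert page_html_name(1) == "combined_transcripts.html"
assert page_html_name(2) == "combined_transcripts_2.html"
assert page_html_name(1, ".low") == "combined_transcripts.low.html"
assert page_html_name(2, ".low") == "combined_transcripts.low_2.html"
assert page_html_name(3, ".low.compact") == "combined_transcripts.low.compact_3.html"
```

Keeping the suffix before the page-number trailer is what keeps default paths byte-identical when the variant is empty.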
Two small CLI-side follow-ups to the per-level output-file rollout:

- Clarify in `--compact` help text that the flag is Markdown-only (variant_suffix() already enforces this, but the help string still read like it applied to all formats). Monk flagged the need for a regression guard on the note, which the test matrix covers.
- Add `update_cache: bool = True` to `convert_jsonl_to`. When the CLI receives `-o <path>`, the user's path wins: we pass `update_cache=False` so the one-off export doesn't occupy an `html_cache` slot keyed by the arbitrary destination. Other callers (TUI, batch index generation, `convert_jsonl_to_html` convenience wrapper) keep the default True.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
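The `update_cache` gating can be pictured as follows — a hypothetical condensed shape, not the real signature (which has many more parameters), with a plain dict standing in for the cache manager:

```python
def convert_jsonl_to(input_path, output_path=None, update_cache=True,
                     cache_manager=None):
    """Condensed sketch: render, then record in the cache only when asked."""
    html = f"<html><!-- rendered from {input_path} --></html>"
    destination = output_path or "combined_transcripts.html"
    if update_cache and cache_manager is not None:
        cache_manager[destination] = html  # stand-in for the real cache write
    return destination, html

# Explicit -o path: one-off export, no cache slot occupied.
cache = {}
dest, _ = convert_jsonl_to("log.jsonl", output_path="custom.html",
                           update_cache=False, cache_manager=cache)
assert dest == "custom.html" and cache == {}
```

Default callers leave `update_cache=True`, so the TUI and batch index paths still populate the cache as before.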
`generate_session` on both renderers built `combined_transcript_link` as the bare default (`combined_transcripts.html` / `.md`), so a `session-abc.low.html` page's top-of-page back-link would send the reader to the full-detail combined transcript instead of the low variant's. The back-link must stay within the same variant the session is being rendered at — otherwise the user bounces between detail levels when navigating.

Both renderers already hold their variant settings on `self.detail` and `self.compact` (set by `get_renderer`), so the fix is a local call to `variant_suffix(...)` and an f-string. Kept the existing cache-manager guard so sessions rendered without a project cache still produce no back-link.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a project has multiple per-variant combined transcript files on
disk, the index card now lists every entry point rather than only
linking to the default. Each project_summaries entry gains an
`html_variants` list of `{file, label, suffix}` dicts, produced by
`_enumerate_project_variants` which globs `combined_transcripts*.html`
against `_VARIANT_ENTRY_RE` (so paginated trailers like
`combined_transcripts_2.html` are excluded). The default (empty
suffix) sorts first, other variants alphabetical.
The index template renders a small "Variants:" row with labelled
links — only when a project has >1 variant, so single-variant projects
look unchanged. Styling lives alongside the existing project-card CSS.
Both live (cached and fallback) and archived project branches populate
`html_variants`, so archived projects that accumulated variants before
archival are enumerated correctly.
Snapshot updated for the new template output (the test fixture's
project has a single variant, so only the CSS addition shows up in the
diff).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
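The enumeration described above — glob against the entry-point regex, default first, rest alphabetical — can be sketched like this. Function and regex names follow the commit text; the label mapping is illustrative only:

```python
import re
from pathlib import Path

ENTRY_RE = re.compile(r"^combined_transcripts((?:\.[A-Za-z]+)*)\.html$")

def enumerate_project_variants(project_dir: Path) -> list[dict[str, str]]:
    """List {file, label, suffix} for each variant entry point on disk."""
    variants = []
    for path in project_dir.glob("combined_transcripts*.html"):
        m = ENTRY_RE.match(path.name)  # paginated trailers like _2 won't match
        if not m:
            continue
        suffix = m.group(1)
        label = ("Full" if suffix == ""
                 else suffix.lstrip(".").replace(".", " ").title())
        variants.append({"file": path.name, "label": label, "suffix": suffix})
    # Default (empty suffix) first, remaining variants alphabetical.
    return sorted(variants, key=lambda v: (v["suffix"] != "", v["suffix"]))
```

Because the regex rejects `combined_transcripts_2.html`, paginated trailers can share the glob without leaking into the index card.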
New `test/test_output_paths.py` (34 tests):
- **variant_suffix matrix** — default empty, detail-only, md+compact
combinations, html compact no-op, string-detail convenience.
- **_VARIANT_ENTRY_RE** — accepts legitimate entry-point filenames
(full / low / high / low.compact / minimal), rejects paginated
trailers (`_2.html`), malformed empties (`..html`), and non-matching
basenames.
- **_get_page_html_path** — composition for default, page 2, variant
page 1/2, and chained (`.low.compact_3.html`).
- **Converter integration**:
- Default variant still lands at `combined_transcripts.html`.
- Low variant lands at `combined_transcripts.low.html`.
- Full and low coexist as distinct files from the same project.
- `.md --compact --detail low` produces
`combined_transcripts.low.compact.md`.
- Individual session files inherit the variant suffix.
- Explicit `-o custom.html` wins, suffix NOT appended.
- `--session-id` export adopts the suffix.
- Single-file input mode composes variant suffix into
`<stem>.<variant>.html`.
- **Cache coexistence** (the blocker-catching test):
- Full→low→full renders — second full is a cache hit (mtime
unchanged), low does not clobber full's file.
- **Session → combined back-link** correctly points at the matching
variant's combined file.
- **_enumerate_project_variants** lists all entries, default first,
paginated trailers excluded.
- **CLI help regression** asserts `--compact` help text carries the
"Markdown-only" note so a future edit doesn't silently drop it.
- **Cache API**: `get_page_data` / `get_page_count` are variant-scoped;
two rows with the same page_number but different variant_suffix
coexist and return their own data.
Full suite: 931 passed (+34). No snapshot updates — new tests work at
the path/cache level, not the rendered-HTML level.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Monk's review of alice-per-level-outputs (#2159) flagged three items:

**BLOCKER — migration 004 cascade-deletes page_sessions**. The migration runs with `PRAGMA foreign_keys = ON` (runner.py:145) and `page_sessions.page_id` is a FK to `html_pages.id` with `ON DELETE CASCADE`. `DROP TABLE html_pages` cascades and wipes every page_sessions row before the RENAME restores the target — so any pre-existing paginated cache silently loses all page→session mappings on first post-upgrade run. Wrap the recreate with `PRAGMA foreign_keys = OFF;` / `PRAGMA foreign_key_check; PRAGMA foreign_keys = ON;` per SQLite's 12-step ALTER TABLE recipe. A companion test pre-populates a row in each of `projects`, `html_pages`, `page_sessions`, applies migration 004 via the real runner path, and asserts all three survive. Verified the test correctly fails when the PRAGMA toggle is removed.

**MINOR 1 — 2 new pyright errors** flagged that contradicted my "unchanged from baseline" note:

- `utils.py:59`: `isinstance(detail, str) and not isinstance(detail, DetailLevel)` is always-True in the first half because `DetailLevel` inherits `str`. Narrow to just `if not isinstance(detail, DetailLevel)` — same semantics, no unnecessary-isinstance warning.
- `converter.py:741`: `_VARIANT_ENTRY_RE` used across modules despite its private name. Rename to `VARIANT_ENTRY_RE` — it's part of the module's public API now.

**MINOR 2 — paginated × variant integration test missing**. The existing coexistence test on 4-message fixtures never triggers the paginated code path. Added `TestPaginatedVariantCoexistence` that forces pagination with `page_size=2` against 2 × 6-message sessions and exercises FULL → LOW → FULL, asserting:

- Both variants' page files (page 1 and page 2) coexist on disk.
- LOW's render does not touch any FULL page file.
- Second FULL render is a cache hit (page 1 and page 2 mtimes unchanged) — this is the test that justifies commit 3's variant-scoped `get_page_count` / `is_page_stale` / `update_page_cache` threading.

All 933 unit tests pass. pyright clean. Monk's earlier non-blocking observations (`_variant_label_from_suffix` casing, TUI-scope) unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
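The PRAGMA bracketing the blocker calls for follows SQLite's documented ALTER TABLE recipe. A simplified sketch (schema reduced to the columns that matter; the real migration runs through the runner): with foreign-key enforcement on, dropping the parent table performs an implicit delete that cascades into `page_sessions`, so the recreate must run with enforcement off and verify integrity afterwards:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys = ON;
    CREATE TABLE html_pages (id INTEGER PRIMARY KEY, page_number INTEGER);
    CREATE TABLE page_sessions (
        page_id INTEGER REFERENCES html_pages(id) ON DELETE CASCADE,
        session_id TEXT
    );
    INSERT INTO html_pages VALUES (1, 1);
    INSERT INTO page_sessions VALUES (1, 'abc');
""")

# FK enforcement off for the recreate, so DROP TABLE does not cascade
# into page_sessions; foreign_key_check verifies integrity afterwards.
conn.executescript("""
    PRAGMA foreign_keys = OFF;
    CREATE TABLE html_pages_new (
        id INTEGER PRIMARY KEY,
        variant_suffix TEXT NOT NULL DEFAULT '',
        page_number INTEGER
    );
    INSERT INTO html_pages_new (id, page_number)
        SELECT id, page_number FROM html_pages;
    DROP TABLE html_pages;
    ALTER TABLE html_pages_new RENAME TO html_pages;
    PRAGMA foreign_key_check;
    PRAGMA foreign_keys = ON;
""")

# page_sessions survived the recreate intact.
assert conn.execute("SELECT COUNT(*) FROM page_sessions").fetchone()[0] == 1
```

Because SQLite foreign keys bind by table name, the RENAME at the end re-targets `page_sessions.page_id` at the new `html_pages` without touching the child table.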
Note: Reviews paused. It looks like this branch is under active development; to avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Variant-aware HTML pagination and caching were implemented by threading a computed variant suffix through the output-path producers, the page-cache API, both renderers, and the projects index.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant Converter
    participant Renderer
    participant CacheManager
    participant DB
    CLI->>Converter: convert_jsonl_to(detail, compact, update_cache)
    Converter->>Renderer: render sessions (compute variant_suffix)
    Renderer-->>Converter: session HTML + combined filenames with suffix
    Converter->>CacheManager: is_page_stale(page_number, page_size, variant_suffix)
    CacheManager->>DB: SELECT html_pages WHERE project_id, variant_suffix, page_number
    DB-->>CacheManager: page row / no row
    CacheManager-->>Converter: stale? / cached page data
    alt needs regeneration and update_cache
        Converter->>CacheManager: update_page_cache(pages..., variant_suffix)
        CacheManager->>DB: INSERT/UPDATE html_pages (ON CONFLICT project_id,variant_suffix,page_number)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
claude_code_log/converter.py (4)
2173-2198: ⚠️ Potential issue | 🟠 Major — Use the requested variant in the all-projects fast path.

This path still checks `session-{id}.html`, `combined_transcripts.html`, `get_page_count()` and `is_page_stale(...)` without the active suffix. If default full-detail output is current, `--all-projects --detail low` can conclude `needs_work == False` and never generate `combined_transcripts.low.html`.

🛠️ Proposed direction

```diff
+    from .utils import variant_suffix as _variant_suffix
+
+    ext = get_file_extension(output_format)
+    suffix = _variant_suffix(detail, compact, output_format)
@@
-    stale_sessions = (
-        cache_manager.get_stale_sessions(valid_session_ids)
+    stale_sessions = (
+        cache_manager.get_stale_sessions(valid_session_ids, suffix)
         if cache_manager
         else []
     )
@@
-    output_path = project_dir / "combined_transcripts.html"
+    output_path = project_dir / f"combined_transcripts{suffix}.{ext}"
@@
-    existing_page_count = cache_manager.get_page_count()
+    existing_page_count = cache_manager.get_page_count(suffix)
     if existing_page_count > 0:
         # Paginated project: check page 1 staleness
-        combined_stale = cache_manager.is_page_stale(1, page_size)[0]
+        combined_stale = cache_manager.is_page_stale(1, page_size, suffix)[0]
```

This requires extending `CacheManager.get_stale_sessions(...)` to build `session-{id}{variant_suffix}.html`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude_code_log/converter.py` around lines 2173 - 2198, The all-projects fast path must honor the requested output variant (e.g., ".low") when checking caches: update the logic around cache_manager.get_stale_sessions, cache_manager.get_archived_session_count, and the combined output checks so they operate on session-{id}{variant_suffix}.html and combined_transcripts{variant_suffix}.html rather than the un-suffixed names; specifically extend CacheManager.get_stale_sessions to accept/construct session-{id}{variant_suffix}.html (and make get_archived_session_count use the same variant), and when checking the combined file use output_path.name + variant_suffix for cache_manager.is_html_stale(...) (keep using is_page_stale(1, page_size) for paginated projects but ensure any html cache checks use the active variant suffix).
977-984: ⚠️ Potential issue | 🟠 Major — Scope page-size invalidation to the active variant.

This still reads one arbitrary page-size config and calls `invalidate_all_pages()`, so rendering `.low` with a different `--page-size` can delete full-detail page cache rows/files. That breaks the per-variant cache isolation this PR is adding.

🛠️ Proposed direction

```diff
-    cached_page_size = cache_manager.get_page_size_config()
+    cached_page_size = cache_manager.get_page_size_config(suffix)
@@
-    old_paths = cache_manager.invalidate_all_pages()
+    old_paths = cache_manager.invalidate_all_pages(suffix)
```

This also needs the matching cache methods to filter `html_pages` by `variant_suffix`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude_code_log/converter.py` around lines 977 - 984, The current logic reads a single cached_page_size via cache_manager.get_page_size_config() and calls cache_manager.invalidate_all_pages(), which removes caches across all variants; change this to only compare and invalidate the active variant by using the active variant identifier (e.g., variant_suffix or the active variant variable) when reading and comparing page sizes (use something like cache_manager.get_page_size_config(variant_suffix)) and replace invalidate_all_pages() with a variant-scoped invalidation method (e.g., cache_manager.invalidate_pages_for_variant(variant_suffix) or invalidate_pages(filter_by_variant=variant_suffix)); also update the cache backend methods that enumerate/filter html_pages to include variant_suffix in their filtering so only rows/files for that variant are affected.
2497-2507: ⚠️ Potential issue | 🟠 Major — Regenerate the index when variant outputs change.

The new `html_variants` data is only visible if `index.html` is regenerated, but the current condition ignores newly generated variant files when the cache itself was unchanged. A run that creates `.low` pages can leave an otherwise-current index without the new "Variants" row.

🛠️ Proposed direction

```diff
-    any_cache_updated = False  # Track if any project had cache updates
+    any_cache_updated = False  # Track if any project had cache updates
+    any_outputs_regenerated = False
@@
     if not needs_work:
@@
         print(
             f"  {project_dir.name}: cached{archived_suffix} ({stats.total_time:.1f}s)"
         )
     else:
+        any_outputs_regenerated = True
         # Slow path: update cache and regenerate output
@@
-    if renderer.is_outdated(index_path) or from_date or to_date or any_cache_updated:
+    if (
+        renderer.is_outdated(index_path)
+        or from_date
+        or to_date
+        or any_cache_updated
+        or any_outputs_regenerated
+    ):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude_code_log/converter.py` around lines 2497 - 2507, The index can remain stale when new variant pages are created because the regeneration condition only checks renderer.is_outdated(index_path), from_date/to_date, or any_cache_updated; update that condition to also detect changes in per-project variant outputs and force regeneration. Add a check that queries the renderer for variant output staleness (e.g., a new method like renderer.variants_outdated(projects_path) or a helper that compares variant file timestamps) and include it in the if clause that writes index_path and calls renderer.generate_projects_index(project_summaries, from_date, to_date) so the index is rewritten whenever variant files (html_variants) are newly created or updated.
1187-1360: ⚠️ Potential issue | 🟠 Major — Honor explicit `-o` before entering paginated generation.

`update_cache=False` skips the combined `html_cache` write, but a large directory can still enter `_generate_paginated_html(...)`, ignore the user's custom `output_path`, write `combined_transcripts{suffix}*.html` in the project, and update page cache rows. Explicit `-o` should force the one-off single-file path.

🛠️ Proposed guard

```diff
 def convert_jsonl_to(
@@
 ) -> Path:
@@
-    if not input_path.exists():
+    explicit_output_path = output_path is not None
+
+    if not input_path.exists():
         raise FileNotFoundError(f"Input path not found: {input_path}")
@@
     if (
         format == "html"
         and cache_manager is not None
         and input_path.is_dir()
+        and not explicit_output_path
         and from_date is None
         and to_date is None
     ):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude_code_log/converter.py` around lines 1187 - 1360, Capture whether the user passed an explicit output_path at function entry (e.g., set explicit_output = output_path is not None before any code that assigns defaults), then prevent entering paginated generation when the caller either disabled cache updates or provided an explicit output path by adding explicit_output and update_cache checks to the pagination decision (the use_pagination calculation and the conditional that leads to calling _generate_paginated_html). Ensure the new condition combines with the existing format/cache/input_path/date checks so _generate_paginated_html is only reached when update_cache is True and no explicit output_path was supplied.
🧹 Nitpick comments (5)
test/__snapshots__/test_snapshot_html.ambr (1)
623-626: Increase contrast for variant hint text.

At Line 624 and Line 625, `font-size: 0.75em` with `color: #888` is likely too low-contrast for normal text on a white background. Please bump contrast (and ideally avoid very small text) to meet accessibility expectations.

Also applies to: 628-630
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/__snapshots__/test_snapshot_html.ambr` around lines 623 - 626, The variant hint CSS currently uses "font-size: 0.75em" and "color: `#888`", which is too small and low-contrast; update the rule that contains these properties to increase readability by raising the font-size (e.g., to ~0.875em or 0.9em) and using a darker color (e.g., `#555` or darker, or a WCAG-compliant contrast color) so the hint text meets accessibility contrast guidelines.

claude_code_log/html/templates/components/project_card_styles.css (1)
140-162: Prefer existing color tokens for theme consistency.

These new hard-coded grays bypass the variables used by the rest of the card, which can make variant pills inconsistent in alternate themes.

🎨 Proposed token-based cleanup

```diff
 .project-variants {
     margin-top: 4px;
     font-size: 0.75em;
-    color: #888;
+    color: var(--text-muted);
 }
@@
 .project-variants .variant-link {
@@
-    border: 1px solid #ccc;
+    border: 1px solid var(--text-muted);
     border-radius: 3px;
-    color: #555;
+    color: var(--text-muted);
     text-decoration: none;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@claude_code_log/html/templates/components/project_card_styles.css` around lines 140 - 162, The .project-variants styles use hard-coded grays (`#888`, `#ccc`, `#555`, `#f0f0f0`) which break theme consistency; update the rules for .project-variants, .project-variants .variant-hint, .project-variants .variant-link and .project-variants .variant-link:hover to use the existing design tokens/CSS variables used elsewhere in the card (e.g. --color-muted, --border-color, --text-secondary, --bg-hover or your project's equivalent token names) instead of literal hex values so the variant pills follow dark/light theme changes.

test/test_output_paths.py (3)
461-541: Migration test reaches into a private helper — wrap the concern.

`_ensure_schema_version_table` is an underscore-prefixed internal of `migrations.runner`; importing it in tests couples the test to an implementation detail and will break silently if the runner's internal API shifts. If there is a public `run_migrations(conn, up_to=N)` (or similar) it should be used here instead. If none exists, consider adding a tiny public test helper in `migrations.runner` (e.g., `apply_migrations_up_to(conn, n)`) so this scenario — "seed at version k, then step to k+1" — is expressible without private imports.

Not a blocker; the test is valuable as-is.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 461 - 541, The test imports the private helper _ensure_schema_version_table from migrations.runner which couples the test to an internal API; change the test to use a public migration helper instead (either an existing run_migrations or apply_migration_batch if present) or add a tiny public helper in migrations.runner (e.g., apply_migrations_up_to(conn, n) or ensure_schema_and_apply(conn, n)) that wraps _ensure_schema_version_table + applying migrations, then update the test to call that public function and remove the direct import of _ensure_schema_version_table.
386-398: Hardcoding `uv run` couples the test to a specific dev workflow.

Not every contributor (or CI job) installs `uv`; `pip install -e .` + plain `pytest` is enough to run the other tests in this file, but this one will fail with `FileNotFoundError`. Prefer invoking the CLI through the installed entry-point module or Click's in-process runner so the test has no external toolchain requirement.

♻️ Two equivalent, more portable forms

```diff
-    result = subprocess.run(
-        ["uv", "run", "claude-code-log", "--help"],
-        capture_output=True,
-        text=True,
-        check=True,
-    )
+    import sys
+    result = subprocess.run(
+        [sys.executable, "-m", "claude_code_log", "--help"],
+        capture_output=True,
+        text=True,
+        check=True,
+    )
```

Or, if the Click command object is importable, use `click.testing.CliRunner` — it avoids spawning a subprocess entirely and is the standard for Click help-text regression tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 386 - 398, The test test_compact_help_notes_markdown_only currently spawns a subprocess with ["uv", "run", "claude-code-log", "--help"], which hardcodes the uv tool and can cause FileNotFoundError; change it to a portable invocation by either (A) calling the package entry-point via the current Python interpreter (use subprocess.run([sys.executable, "-m", "claude_code_log", "--help"], ...)) or (B) preferably import the Click command object and use click.testing.CliRunner.invoke(...) against the command (e.g., import the click command from claude_code_log.cli or the module that exposes the CLI and call CliRunner().invoke(cli, ["--help"])) so the test no longer depends on an external uv binary and still asserts the same help text.
367-374: Assert the exact variant order instead of a sorted set.

The test checks "default first" but then discards order information by sorting. The production code explicitly guarantees default (empty suffix) first, then remaining variants alphabetically. The test should verify this exact order to catch regressions in the UI's variant ordering:

Suggested change

```diff
-    assert sorted(suffixes) == ["", ".high", ".low"]
+    assert suffixes == ["", ".high", ".low"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 367 - 374, The test currently verifies presence of suffixes but loses ordering by using sorted(); update the assertions to check the exact order guaranteed by _enumerate_project_variants: ensure suffixes equals ["", ".high", ".low"] and labels correspondingly equals ["Full", "High", "Low"] (or the expected label ordering produced by the function) so the test asserts default (empty suffix) first and the remaining variants in alphabetical order instead of a sorted set.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/test_output_paths.py`:
- Around line 315-319: The cache-hit assertion is fragile due to filesystem
mtime resolution; change the test around convert_jsonl_to and the full2/stat()
check to compare full2.stat().st_mtime_ns (and optionally st_ino) against the
original full_mtime_ns (compute full_mtime_ns from full.stat().st_mtime_ns
earlier) instead of st_mtime, or alternatively assert a deterministic content
marker/version written by convert_jsonl_to to prove the file was unchanged;
update references to full_mtime to full_mtime_ns and use full2.stat().st_ino if
you want inode stability verification.
- Around line 549-598: The test's comment claiming "DAG ordering" is
non-deterministic should be removed because pagination in
TestPaginatedVariantCoexistence is deterministic; update the docstring/comment
around the convert_jsonl_to setup to assert deterministic pagination instead of
"may or may not split". Replace fragile mtime-based cache assertions that use
full_page1_mtime/full_page2_mtime and the second FULL render checks with a
direct cache verification: after calling convert_jsonl_to (the cache-backed
renderer), assert cache metadata or checksum entries for the generated pages did
not change (or compare file checksums) rather than relying on filesystem mtimes;
specifically update the assertions around full_page1.stat().st_mtime and
full_page2.stat().st_mtime and the final full_page1_again == full_page1 check to
validate cache state via the renderer's cache lookup or file hash comparison.
Ensure references to convert_jsonl_to, combined_transcripts.html/low variants,
and the TestPaginatedVariantCoexistence test are used to locate the code to
modify.
---
Outside diff comments:
In `@claude_code_log/converter.py`:
- Around line 2173-2198: The all-projects fast path must honor the requested
output variant (e.g., ".low") when checking caches: update the logic around
cache_manager.get_stale_sessions, cache_manager.get_archived_session_count, and
the combined output checks so they operate on session-{id}{variant_suffix}.html
and combined_transcripts{variant_suffix}.html rather than the un-suffixed names;
specifically extend CacheManager.get_stale_sessions to accept/construct
session-{id}{variant_suffix}.html (and make get_archived_session_count use the
same variant), and when checking the combined file use output_path.name +
variant_suffix for cache_manager.is_html_stale(...) (keep using is_page_stale(1,
page_size) for paginated projects but ensure any html cache checks use the
active variant suffix).
- Around line 977-984: The current logic reads a single cached_page_size via
cache_manager.get_page_size_config() and calls
cache_manager.invalidate_all_pages(), which removes caches across all variants;
change this to only compare and invalidate the active variant by using the
active variant identifier (e.g., variant_suffix or the active variant variable)
when reading and comparing page sizes (use something like
cache_manager.get_page_size_config(variant_suffix)) and replace
invalidate_all_pages() with a variant-scoped invalidation method (e.g.,
cache_manager.invalidate_pages_for_variant(variant_suffix) or
invalidate_pages(filter_by_variant=variant_suffix)); also update the cache
backend methods that enumerate/filter html_pages to include variant_suffix in
their filtering so only rows/files for that variant are affected.
- Around line 2497-2507: The index can remain stale when new variant pages are
created because the regeneration condition only checks
renderer.is_outdated(index_path), from_date/to_date, or any_cache_updated;
update that condition to also detect changes in per-project variant outputs and
force regeneration. Add a check that queries the renderer for variant output
staleness (e.g., a new method like renderer.variants_outdated(projects_path) or
a helper that compares variant file timestamps) and include it in the if clause
that writes index_path and calls
renderer.generate_projects_index(project_summaries, from_date, to_date) so the
index is rewritten whenever variant files (html_variants) are newly created or
updated.
- Around line 1187-1360: Capture whether the user passed an explicit output_path
at function entry (e.g., set explicit_output = output_path is not None before
any code that assigns defaults), then prevent entering paginated generation when
the caller either disabled cache updates or provided an explicit output path by
adding explicit_output and update_cache checks to the pagination decision (the
use_pagination calculation and the conditional that leads to calling
_generate_paginated_html). Ensure the new condition combines with the existing
format/cache/input_path/date checks so _generate_paginated_html is only reached
when update_cache is True and no explicit output_path was supplied.
---
Nitpick comments:
In `@claude_code_log/html/templates/components/project_card_styles.css`:
- Around line 140-162: The .project-variants styles use hard-coded grays (`#888`,
`#ccc`, `#555`, `#f0f0f0`) which break theme consistency; update the rules for
.project-variants, .project-variants .variant-hint, .project-variants
.variant-link and .project-variants .variant-link:hover to use the existing
design tokens/CSS variables used elsewhere in the card (e.g. --color-muted,
--border-color, --text-secondary, --bg-hover or your project's equivalent token
names) instead of literal hex values so the variant pills follow dark/light
theme changes.
In `@test/__snapshots__/test_snapshot_html.ambr`:
- Around line 623-626: The variant hint CSS currently uses "font-size: 0.75em"
and "color: `#888`", which is too small and low-contrast; update the rule that
contains these properties to increase readability by raising the font-size
(e.g., to ~0.875em or 0.9em) and using a darker color (e.g., `#555` or darker, or
a WCAG-compliant contrast color) so the hint text meets accessibility contrast
guidelines.
In `@test/test_output_paths.py`:
- Around line 461-541: The test imports the private helper
_ensure_schema_version_table from migrations.runner which couples the test to an
internal API; change the test to use a public migration helper instead (either
an existing run_migrations or apply_migration_batch if present) or add a tiny
public helper in migrations.runner (e.g., apply_migrations_up_to(conn, n) or
ensure_schema_and_apply(conn, n)) that wraps _ensure_schema_version_table +
applying migrations, then update the test to call that public function and
remove the direct import of _ensure_schema_version_table.
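A public wrapper of the kind proposed might look like this (sketch only; the real `migrations.runner` internals, file layout, and bookkeeping table may differ):

```python
import sqlite3
from pathlib import Path

def apply_migrations_up_to(
    conn: sqlite3.Connection, target: int, migrations_dir: Path
) -> None:
    # Stand-in for _ensure_schema_version_table: create the bookkeeping
    # table, then apply each numbered migration file up to `target`.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version "
        "(version INTEGER PRIMARY KEY, applied_at TEXT)"
    )
    for n in range(1, target + 1):
        path = next(migrations_dir.glob(f"{n:03d}_*.sql"))
        conn.executescript(path.read_text(encoding="utf-8"))
        conn.execute(
            "INSERT OR IGNORE INTO schema_version "
            "VALUES (?, datetime('now'))",
            (n,),
        )
    conn.commit()
```

The test would then call only this public symbol and drop the underscore import.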
- Around line 386-398: The test test_compact_help_notes_markdown_only currently
spawns a subprocess with ["uv", "run", "claude-code-log", "--help"], which
hardcodes the uv tool and can cause FileNotFoundError; change it to a portable
invocation by either (A) calling the package entry-point via the current Python
interpreter (use subprocess.run([sys.executable, "-m", "claude_code_log",
"--help"], ...)) or (B) preferably import the Click command object and use
click.testing.CliRunner.invoke(...) against the command (e.g., import the click
command from claude_code_log.cli or the module that exposes the CLI and call
CliRunner().invoke(cli, ["--help"])) so the test no longer depends on an
external uv binary and still asserts the same help text.
- Around line 367-374: The test currently verifies presence of suffixes but
loses ordering by using sorted(); update the assertions to check the exact order
guaranteed by _enumerate_project_variants: ensure suffixes equals ["", ".high",
".low"] and labels correspondingly equals ["Full", "High", "Low"] (or the
expected label ordering produced by the function) so the test asserts default
(empty suffix) first and the remaining variants in alphabetical order instead of
a sorted set.
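The tightened assertion could be written as follows (the `(suffix, label)` tuple shape is assumed for illustration):

```python
def check_variant_order(variants: list[tuple[str, str]]) -> None:
    # Assert the exact order, not a sorted set: default (empty suffix)
    # first, remaining variants alphabetical.
    suffixes = [s for s, _ in variants]
    labels = [label for _, label in variants]
    assert suffixes == ["", ".high", ".low"]
    assert labels == ["Full", "High", "Low"]
```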
📒 Files selected for processing (11)
claude_code_log/cache.py, claude_code_log/cli.py, claude_code_log/converter.py, claude_code_log/html/renderer.py, claude_code_log/html/templates/components/project_card_styles.css, claude_code_log/html/templates/index.html, claude_code_log/markdown/renderer.py, claude_code_log/migrations/004_html_pagination_variant.sql, claude_code_log/utils.py, test/__snapshots__/test_snapshot_html.ambr, test/test_output_paths.py
PR #114 CI on Windows fails with a `UnicodeDecodeError` on cp1252 because `Path.read_text()` picks up the platform default encoding rather than utf-8. The HTML under test contains emoji glyphs (🤷 User, 🤖 Assistant, 📦 Conversation compacted, etc.), which cp1252 can't decode. Other read/write calls in this file operate on plain ASCII fixtures so they don't trip on Windows. The generated HTML is the only case. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
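The fix is to pin the encoding at the read site, e.g. (helper name is illustrative):

```python
from pathlib import Path

def read_generated_html(path: Path) -> str:
    # Path.read_text() without an encoding uses the platform default
    # (cp1252 on Windows), which cannot decode the emoji glyphs in the
    # generated HTML; pin utf-8 explicitly.
    return path.read_text(encoding="utf-8")
```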
🧹 Nitpick comments (1)
test/test_output_paths.py (1)
258-270: Add coverage that explicit exports do not claim cache rows.
This verifies the literal output path, but the PR guarantee also says explicit `-o` renders must not write page-cache rows. A regression in `update_cache=(output is None)` could still pass this test if the file lands at `custom.html`. Consider asserting via `CacheManager` or direct SQLite inspection that no `html_pages` row is created for either the default or `.low` variant after this render.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 258 - 270, Add an assertion to test_explicit_output_path_honoured to ensure explicit -o exports do not write page-cache rows: after calling convert_jsonl_to(...) with explicit path, open the CacheManager (or query the sqlite html_pages table) and assert there are no rows for the session/name and no variant rows (e.g., ".low" or default html) created; reference convert_jsonl_to and CacheManager/html_pages to locate the caching behavior and ensure update_cache=(output is None) regression is caught by checking absence of any html_pages entries for that session.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/test_output_paths.py`:
- Around line 258-270: Add an assertion to test_explicit_output_path_honoured to
ensure explicit -o exports do not write page-cache rows: after calling
convert_jsonl_to(...) with explicit path, open the CacheManager (or query the
sqlite html_pages table) and assert there are no rows for the session/name and
no variant rows (e.g., ".low" or default html) created; reference
convert_jsonl_to and CacheManager/html_pages to locate the caching behavior and
ensure update_cache=(output is None) regression is caught by checking absence of
any html_pages entries for that session.
Oh nice, this is great, I didn't think of this when looking at the previous PR!
CodeRabbit review flagged st_mtime equality as filesystem-resolution sensitive. While GitHub Actions runners (NTFS, ext4, APFS) all have sub-second granularity and the test isn't flaky in practice on any target CI, st_mtime_ns + inode is strictly better with zero downside: - st_mtime_ns gives nanosecond resolution on the three major FSes. - inode shifts under delete+recreate even when mtime happens to match, catching a rewrite-in-place-preserving-mtime edge case. Also drop the inaccurate "pagination may or may not split depending on DAG ordering" comment — `_assign_sessions_to_pages` sorts by `first_timestamp` with Python's stable sort, so `page_size=2` against 2 × 6-message sessions produces exactly 2 pages deterministically. Production cache invalidation (`cache.py::is_file_cached`) uses `abs(source_mtime - cached_mtime) < 1.0` to tolerate filesystem mtime granularity and Python↔OS clock skew — that 1-second tolerance is a deliberate design choice and stays. This commit only touches the test. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
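The fingerprint the test switches to can be captured in one helper (name illustrative):

```python
import os
from pathlib import Path

def file_identity(path: Path) -> tuple[int, int]:
    # (mtime_ns, inode): mtime_ns gives nanosecond resolution on the
    # major filesystems, and the inode shifts under delete+recreate even
    # when the mtime happens to match.
    st = os.stat(path)
    return (st.st_mtime_ns, st.st_ino)
```

A cache-hit assertion then compares `file_identity(page)` before and after the second render.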
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/test_output_paths.py (2)
395-408: CLI help test hard-codes `uv run`, which may not be present in all test environments.
Invoking `uv run claude-code-log` makes this test dependent on `uv` being installed on PATH in CI/dev sandboxes. If the package is already installed in the active environment, `sys.executable -m claude_code_log` (or the installed console script directly) is more portable and avoids a second resolution layer.
♻️ Suggested alternative
-    result = subprocess.run(
-        ["uv", "run", "claude-code-log", "--help"],
-        capture_output=True,
-        text=True,
-        check=True,
-    )
+    import sys
+    result = subprocess.run(
+        [sys.executable, "-m", "claude_code_log", "--help"],
+        capture_output=True,
+        text=True,
+        check=True,
+    )
Requires `claude_code_log/__main__.py` to exist; otherwise keep `uv` but consider guarding with `pytest.importorskip` / `shutil.which("uv")`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 395 - 408, The test TestCliHelpText.test_compact_help_notes_markdown_only hard-codes the external "uv run" invocation, making it unreliable if uv isn't on PATH; update the subprocess.run call in that test to use the current Python interpreter module invocation (e.g., replace ["uv", "run", "claude-code-log", "--help"] with [sys.executable, "-m", "claude_code_log", "--help"] and add import sys at the top), or alternatively add a guard using shutil.which("uv") or pytest.importorskip("uv") to skip when uv is unavailable; change the command in the subprocess.run call and add the corresponding import/guard inside the test.
471-551: Migration test reaches into a private helper (`_ensure_schema_version_table`).
Importing `_ensure_schema_version_table` (leading underscore) couples the test to a private API of `migrations.runner`. If there's a public entry point that applies migrations up to a target version, prefer it — otherwise this test will need updating whenever the runner internals change. Not blocking, but worth revisiting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 471 - 551, The test imports and calls the private helper _ensure_schema_version_table from migrations.runner; change the test to use a public API instead by removing the _ensure_schema_version_table import and either (a) call an existing public migration entrypoint that initializes the schema (use apply_migration to bring DB to a baseline if such an entrypoint exists) or (b) if no public initializer exists, add a non-underscored helper ensure_schema_version_table in migrations.runner and call that from the test; update the test to reference the public symbol ensure_schema_version_table or rely solely on apply_migration(conn, ...) to set up the schema rather than touching the private _ensure_schema_version_table.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/test_output_paths.py`:
- Around line 154-175: The fixture function _make_assistant incorrectly sets
"userType" to "human"; change it to the correct value for assistant entries
(e.g., "assistant") so assistant messages reflect their actual type. Locate the
_make_assistant function and update the "userType" field from "human" to the
appropriate assistant identifier to avoid misleading behavior in tests that
branch on userType.
---
Nitpick comments:
In `@test/test_output_paths.py`:
- Around line 395-408: The test
TestCliHelpText.test_compact_help_notes_markdown_only hard-codes the external
"uv run" invocation, making it unreliable if uv isn't on PATH; update the
subprocess.run call in that test to use the current Python interpreter module
invocation (e.g., replace ["uv", "run", "claude-code-log", "--help"] with
[sys.executable, "-m", "claude_code_log", "--help"] and add import sys at the
top), or alternatively add a guard using shutil.which("uv") or
pytest.importorskip("uv") to skip when uv is unavailable; change the command in
the subprocess.run call and add the corresponding import/guard inside the test.
- Around line 471-551: The test imports and calls the private helper
_ensure_schema_version_table from migrations.runner; change the test to use a
public API instead by removing the _ensure_schema_version_table import and
either (a) call an existing public migration entrypoint that initializes the
schema (use apply_migration to bring DB to a baseline if such an entrypoint
exists) or (b) if no public initializer exists, add a non-underscored helper
ensure_schema_version_table in migrations.runner and call that from the test;
update the test to reference the public symbol ensure_schema_version_table or
rely solely on apply_migration(conn, ...) to set up the schema rather than
touching the private _ensure_schema_version_table.
def _make_assistant(uuid: str, session_id: str, ts: str, parent: str) -> dict:
    return {
        "type": "assistant",
        "timestamp": ts,
        "parentUuid": parent,
        "isSidechain": False,
        "userType": "human",
        "cwd": "/tmp",
        "sessionId": session_id,
        "version": "1.0.0",
        "uuid": uuid,
        "requestId": f"req_{uuid}",
        "message": {
            "id": uuid,
            "type": "message",
            "role": "assistant",
            "model": "claude-3-sonnet",
            "content": [{"type": "text", "text": "reply"}],
            "stop_reason": "end_turn",
            "usage": {"input_tokens": 10, "output_tokens": 5},
        },
    }
_make_assistant sets userType: "human" on assistant entries.
Minor semantic inconsistency in the fixture — assistant messages typically wouldn't be tagged as userType: "human". Harmless for the path/caching assertions in this file, but if these fixtures get reused in tests that filter/branch on userType, it could produce misleading results.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/test_output_paths.py` around lines 154 - 175, The fixture function
_make_assistant incorrectly sets "userType" to "human"; change it to the correct
value for assistant entries (e.g., "assistant") so assistant messages reflect
their actual type. Locate the _make_assistant function and update the "userType"
field from "human" to the appropriate assistant identifier to avoid misleading
behavior in tests that branch on userType.
CodeRabbit flagged the fixture mismatch: `_make_assistant` (and `_make_user`) set `userType: "human"`, but real Claude Code JSONL files tag every entry with `userType: "external"` regardless of role. Harmless for the path/caching assertions these helpers support in this file, but a misleading default if they get reused in tests that branch on userType. Fix both helpers. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (1)
test/test_output_paths.py (1)
396-408: Consider using Click's `CliRunner` instead of a `uv run` subprocess.
Invoking `uv run claude-code-log --help` via `subprocess` couples this test to the presence of the `uv` binary and an installed console script in the test environment. If `uv` isn't on PATH (some CI matrices, editable-only dev shells, or contributors using plain `pip`/venv), `subprocess.run(..., check=True)` will raise `FileNotFoundError`/`CalledProcessError` rather than producing a useful help-text regression signal.
Since this is purely a help-text regression, `click.testing.CliRunner` invoking the CLI entry point directly is faster, hermetic, and avoids the tool dependency:
♻️ Suggested refactor
-    def test_compact_help_notes_markdown_only(self) -> None:
-        result = subprocess.run(
-            ["uv", "run", "claude-code-log", "--help"],
-            capture_output=True,
-            text=True,
-            check=True,
-        )
-        # Combined text: flatten whitespace since click wraps help lines.
-        flat = re.sub(r"\s+", " ", result.stdout)
+    def test_compact_help_notes_markdown_only(self) -> None:
+        from click.testing import CliRunner
+        from claude_code_log.cli import main  # adjust to the actual entry point
+
+        result = CliRunner().invoke(main, ["--help"])
+        assert result.exit_code == 0
+        flat = re.sub(r"\s+", " ", result.output)
         assert "--compact" in flat
         assert "Markdown-only" in flat, (
             f"Expected 'Markdown-only' note in --compact help; got:\n{flat}"
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/test_output_paths.py` around lines 396 - 408, The test method test_compact_help_notes_markdown_only currently shells out with subprocess to run "uv run claude-code-log --help"; replace that with click.testing.CliRunner to invoke the CLI entry point directly (e.g., runner = CliRunner(); result = runner.invoke(cli, ["--help"])), assert result.exit_code == 0, read text from result.output, then apply the same whitespace flattening and assertions for "--compact" and "Markdown-only". Import CliRunner from click.testing and the CLI entry point (the click Group/function used as the console script) at the top of the test file so the test is hermetic and no longer depends on the "uv" binary or installed console script.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/test_output_paths.py`:
- Around line 396-408: The test method test_compact_help_notes_markdown_only
currently shells out with subprocess to run "uv run claude-code-log --help";
replace that with click.testing.CliRunner to invoke the CLI entry point directly
(e.g., runner = CliRunner(); result = runner.invoke(cli, ["--help"])), assert
result.exit_code == 0, read text from result.output, then apply the same
whitespace flattening and assertions for "--compact" and "Markdown-only". Import
CliRunner from click.testing and the CLI entry point (the click Group/function
used as the console script) at the top of the test file so the test is hermetic
and no longer depends on the "uv" binary or installed console script.
CodeRabbit flagged that shelling out with `subprocess.run(["uv", "run", "claude-code-log", "--help"])` couples the test to the presence of `uv` on PATH and an installed console script. Some CI matrices, editable-only dev shells, and plain pip/venv setups wouldn't have either, so the test would raise `FileNotFoundError` / `CalledProcessError` instead of giving a help-text regression signal. `click.testing.CliRunner` invokes the entry point in-process: hermetic, faster, no external tool dependency. Drop the `subprocess` import too. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/test_output_paths.py`:
- Around line 505-550: Seed html_pages with a non-default id (not the implicit
1) before inserting page_sessions and run the migration via
apply_migration(conn, 4, mig_004); then assert that the specific page id
returned from "SELECT id FROM html_pages LIMIT 1" is unchanged and that
page_sessions.page_id still matches it; finally run "PRAGMA foreign_key_check"
on conn and assert it returns no rows to validate foreign key integrity between
html_pages and page_sessions after migration 004 (also confirm variant_suffix as
before).
conn.execute(
    "INSERT INTO html_pages "
    "(project_id, page_number, html_path, page_size_config, message_count, "
    " first_session_id, last_session_id, generated_at, library_version) "
    "VALUES (?, 1, 'combined_transcripts.html', 2000, 10, 's1', 's1', "
    " datetime('now'), '0.0.1')",
    (project_id,),
)
page_id = conn.execute("SELECT id FROM html_pages LIMIT 1").fetchone()[0]
conn.execute(
    "INSERT INTO page_sessions (page_id, session_id, session_order) "
    "VALUES (?, 's1', 0)",
    (page_id,),
)
conn.commit()

before_pages = conn.execute("SELECT COUNT(*) FROM html_pages").fetchone()[0]
before_page_sessions = conn.execute(
    "SELECT COUNT(*) FROM page_sessions"
).fetchone()[0]
assert before_pages == 1
assert before_page_sessions == 1

# Apply migration 004 — the blocker was that this silently
# cascade-deleted every page_sessions row.
mig_004 = next(migrations_dir.glob("004_*.sql"))
apply_migration(conn, 4, mig_004)

after_pages = conn.execute("SELECT COUNT(*) FROM html_pages").fetchone()[0]
after_page_sessions = conn.execute(
    "SELECT COUNT(*) FROM page_sessions"
).fetchone()[0]

assert after_pages == 1, "html_pages row must survive the table recreate"
assert after_page_sessions == 1, (
    f"page_sessions row must survive migration 004 "
    f"(was {before_page_sessions}, now {after_page_sessions}) — "
    "PRAGMA foreign_keys toggle missing?"
)

# Confirm the new variant_suffix column is present and the
# existing row has the default '' variant.
row = conn.execute(
    "SELECT variant_suffix FROM html_pages LIMIT 1"
).fetchone()
assert row[0] == ""
Assert FK validity, not just surviving row counts.
This regression can still pass if migration 004 preserves one page_sessions row but recreates html_pages with different ids, leaving an orphaned page_sessions.page_id. Seed a non-default page id and assert both id preservation and PRAGMA foreign_key_check.
🧪 Proposed test tightening
+ page_id = 42
conn.execute(
"INSERT INTO html_pages "
- "(project_id, page_number, html_path, page_size_config, message_count, "
+ "(id, project_id, page_number, html_path, page_size_config, message_count, "
" first_session_id, last_session_id, generated_at, library_version) "
- "VALUES (?, 1, 'combined_transcripts.html', 2000, 10, 's1', 's1', "
+ "VALUES (?, ?, 1, 'combined_transcripts.html', 2000, 10, 's1', 's1', "
" datetime('now'), '0.0.1')",
- (project_id,),
+ (page_id, project_id),
)
- page_id = conn.execute("SELECT id FROM html_pages LIMIT 1").fetchone()[0]
conn.execute(
"INSERT INTO page_sessions (page_id, session_id, session_order) "
"VALUES (?, 's1', 0)",
(page_id,),
)
@@
row = conn.execute(
"SELECT variant_suffix FROM html_pages LIMIT 1"
).fetchone()
assert row[0] == ""
+
+ after_page_id = conn.execute(
+ "SELECT id FROM html_pages LIMIT 1"
+ ).fetchone()[0]
+ page_session_page_id = conn.execute(
+ "SELECT page_id FROM page_sessions LIMIT 1"
+ ).fetchone()[0]
+ assert after_page_id == page_id
+ assert page_session_page_id == page_id
+ assert conn.execute("PRAGMA foreign_key_check").fetchall() == []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/test_output_paths.py` around lines 505 - 550, Seed html_pages with a
non-default id (not the implicit 1) before inserting page_sessions and run the
migration via apply_migration(conn, 4, mig_004); then assert that the specific
page id returned from "SELECT id FROM html_pages LIMIT 1" is unchanged and that
page_sessions.page_id still matches it; finally run "PRAGMA foreign_key_check"
on conn and assert it returns no rows to validate foreign key integrity between
html_pages and page_sessions after migration 004 (also confirm variant_suffix as
before).
The detail-level filter and compact-Markdown mode landed in #96 and #114 but weren't surfaced in the README. Add: - Key Features bullet calling out the pairing with --format md for feeding past conversations to an LLM - New Usage subsection "Feeding Past Conversations to an LLM" with the --detail low --format md --compact combo and a short level reference - Problem-statement entry for LLM-based analysis/experience building - Markdown Output Features bullet for --compact Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…115) * Preserve agentId anchors in parallel-Task stitch Parallel `Task` tool_uses emit a chain of assistant tool_use entries whose sibling tool_result anchors each reference a different subagent via agentId. Variant 2 of `_stitch_tool_results` classified the intermediate assistant subtrees as "dead-end" and let the walker mark their descendants — including agentId-bearing tool_results — as skipped, so only the outermost subagent session had an attachment point the traversal could find. Fix: add `_collect_agent_anchors` to walk an assistant subtree for UserTranscriptEntry descendants with agentId, and surface those anchors into the stitch result alongside the assistant dead-ends. The anchors still end up in `skipped` via `_collect_descendants` (since they're descendants of the assistant), but now they also live in the DAG-line's chain — which is what `build_session_tree` reads when choosing attachment points for subagent sessions. Regression tests cover the 2-teammate case, 3-teammate case (the experiments/worktrees shape), and an end-to-end splice verifying a subagent entry is actually reached through the anchor. Fixes issue #91 symptom: only 2 of 6 subagents rendering in a teammates session. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * Restore collapse affordance on expanded tool-param details The generic tool renderer (render_params_table) wraps long parameter values in <details class='tool-param-collapsible'>. The CSS hid the entire <summary> on open, which also removed the browser's default disclosure arrow — leaving no visible way to collapse the expanded value. Wrap the preview text in <span class='tool-param-preview'> so the [open] rule can hide just the preview, not the whole summary. The summary (with its list-item marker / arrow) stays visible, and a "collapse" hint via ::after makes the click target obvious. Covers both call sites in tool_formatters.py: structured (JSON) and simple (string) long values. 
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * Advertise --detail / --compact in README The detail-level filter and compact-Markdown mode landed in #96 and #114 but weren't surfaced in the README. Add: - Key Features bullet calling out the pairing with --format md for feeding past conversations to an LLM - New Usage subsection "Feeding Past Conversations to an LLM" with the --detail low --format md --compact combo and a short level reference - Problem-statement entry for LLM-based analysis/experience building - Markdown Output Features bullet for --compact Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
Addresses CodeRabbit's staleness concern on PR #96: the regeneration early-exit used to ignore `--detail`/`--compact`, so switching flags on the same output path could serve stale content. This PR fixes the root cause by giving each variant its own filename, its own cache bucket, and its own mtime — staleness per variant dissolves.
Naming scheme
- `--detail full` (default) → unchanged: `combined_transcripts.html`, `session-{id}.html`.
- `--detail {high,low,minimal}` → `combined_transcripts.low.html`, `session-{id}.low.html`.
- `--compact` (Markdown only) adds `.compact` before the extension: `combined_transcripts.low.compact.md`.
- Paginated pages keep the variant: `combined_transcripts.low_2.html`.
- `-o custom.html` honours the user's literal path and doesn't write a cache row (one-off exports shouldn't own cache slots).
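Under those rules, the naming helper can be sketched as follows (the real implementation lives in `claude_code_log.utils`; this is an assumption-level re-creation with `detail` as a plain string):

```python
def variant_suffix(detail: str, compact: bool, fmt: str) -> str:
    # Default (full + no compact) returns "" so existing paths are
    # unchanged; --compact only contributes for Markdown output.
    parts = []
    if detail != "full":
        parts.append(f".{detail}")
    if compact and fmt == "md":
        parts.append(".compact")
    return "".join(parts)
```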
`utils.variant_suffix(detail, compact, format)` is the single helper; `VARIANT_ENTRY_RE` matches page-1 entry points for index enumeration.
Migration 004 — variant-aware `html_pages`
Migration 003's `UNIQUE(project_id, page_number)` let a second-variant render clobber the first's cache rows via `ON CONFLICT`, and `get_page_count()` counted across variants (breaking orphan cleanup). Migration 004 recreates `html_pages` with a `variant_suffix` column and `UNIQUE(project_id, variant_suffix, page_number)`, preserving `id`s so `page_sessions.page_id` foreign keys stay intact. The recreate is wrapped in `PRAGMA foreign_keys = OFF/ON` + `PRAGMA foreign_key_check` so an existing paginated cache doesn't cascade-delete `page_sessions` rows on first post-upgrade run.
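The recreate pattern described above, abbreviated to the columns relevant here (a sketch, not the actual `004_html_pagination_variant.sql`):

```python
import sqlite3

RECREATE_SQL = """
PRAGMA foreign_keys = OFF;
CREATE TABLE html_pages_new (
    id INTEGER PRIMARY KEY,
    project_id INTEGER NOT NULL,
    variant_suffix TEXT NOT NULL DEFAULT '',
    page_number INTEGER NOT NULL,
    UNIQUE (project_id, variant_suffix, page_number)
);
INSERT INTO html_pages_new (id, project_id, page_number)
    SELECT id, project_id, page_number FROM html_pages;
DROP TABLE html_pages;
ALTER TABLE html_pages_new RENAME TO html_pages;
PRAGMA foreign_keys = ON;
"""
```

Because `id` is copied verbatim and foreign keys are off for the duration, `page_sessions.page_id` rows survive and `PRAGMA foreign_key_check` comes back empty afterwards.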
Cache API (`update_page_cache`, `is_page_stale`, `get_page_count`, `get_page_data`) gains a `variant_suffix: str = ""` kwarg; every existing caller keeps working at the default.
Multi-variant index (Option A)
`_enumerate_project_variants` globs each project dir for `combined_transcripts*.html` entry points, parses the variant suffix into a human label, and emits an `html_variants` list. The `index.html` template renders a small "Variants:" row under the project card when a project has ≥2 variants. Single-variant projects look unchanged. Both live and archived project branches populate it.
Commits
Ordered for clean bisect (schema floor first, then helpers, then
threading, then tests):
- `variant_suffix` kwargs defaulted to `""`.
- `variant_suffix` helper + regex — `utils.py` primitives.
- Orphan cleanup and cache writes all variant-scoped; stale force-regen clauses (`or detail != FULL or compact`) dropped now that the per-variant cache row is authoritative.
- `--compact` help + `update_cache` kwarg — help text notes Markdown-only; CLI passes `update_cache=(output is None)` so explicit `-o` exports don't own a cache row.
- Session back-links point to their own variant's combined file.
- Pyright regressions (renamed `_VARIANT_ENTRY_RE` → `VARIANT_ENTRY_RE`, simplified redundant `isinstance`), and a paginated-integration test (force pagination via `page_size=2`, exercise FULL → LOW → FULL cache-hit behavior).
Test coverage
`test/test_output_paths.py` (36 new tests): `variant_suffix` matrix (7 cases). `VARIANT_ENTRY_RE` accept/reject matrix (10 cases). `_get_page_html_path` composition.
-o, single-file input,--session-idexport.hits cache, neither variant deletes the other's files.
_enumerate_project_variantsenumeration.--compacthelp regression.get_page_data,get_page_count.via the real
runner.apply_migration, assertspage_sessionssurvives. Fails fast if the PRAGMA toggle is removed.
page_size=2, renders FULL → LOW → FULL, asserts all page filescoexist and the second FULL render is a cache hit.
Full suite: 934 passed, 7 skipped (unit). TUI 66, browser 29,
integration 69. All lint/format/type checks clean.
Test plan
(multi-page combined transcript): first post-upgrade render should
keep session mappings intact (migration 004 FK toggle).
`claude-code-log --detail low some-project/` produces `combined_transcripts.low.html` alongside any existing `combined_transcripts.html`, neither touching the other. `claude-code-log --detail low --compact --format md some-project/` produces
`combined_transcripts.low.compact.md`. The project index lists both variants.
`session-{id}.low.html` lands on `combined_transcripts.low.html`, not the bare default.
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
`--compact` help: “Markdown-only — a no-op for HTML.”
Chores