Conversation
tonyxiao left a comment:
Looks good — no blockers. A few inline notes for future reference.
```ts
// rowAssignments (it no longer lands at the bottom).
let donated = 0
while (donated < deleteList.length && appends.length > 0) {
  const targetRow = deleteList[donated]
```
Nit: `appends.shift()!` is O(n) per call (it shifts the whole array). For typical Sheets batch sizes this is fine, but if it ever gets hot, switch to an index pointer (`let donorIdx = 0; const donor = appends[donorIdx++]`) and slice the remaining appends once at the end.
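A minimal sketch of that index-pointer variant; the buffer shapes are assumed from the quoted hunks, so treat the field names as illustrative:

```ts
// Illustrative shapes, inferred from the quoted hunks.
type Pending = { values: string[] }
declare const deleteList: number[]
declare const appends: Pending[]
declare const bufferedUpdates: { rowNumber: number; values: string[] }[]

let donorIdx = 0
let donated = 0
while (donated < deleteList.length && donorIdx < appends.length) {
  const targetRow = deleteList[donated++]
  const donor = appends[donorIdx++] // O(1), vs. shift()'s O(n) array move
  bufferedUpdates.push({ rowNumber: targetRow, values: donor.values })
}
// One O(n) slice at the end instead of one shift per donation.
const remainingAppends = appends.slice(donorIdx)
```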
```ts
    v == null ? '' : String(v)
  )
  bufferedUpdates.push({ rowNumber: deletedRow, values: donorValues })
  bufferedUpdates.push({ rowNumber: donorRow, values: blankRow })
```
FYI: when a survivor is swapped from its original position into the deleted slot, its `rowKey` isn't recorded in `rowAssignments`, so the service layer's cached row map will still point it at the old (now blank) position.
This is self-healing (the next cycle's read-before-flush dedup resolves it), so not blocking. If first-write-after-delete accuracy matters later, reverse-lookup the donor's `rowKey` from `freshMap` and emit an assignment at the new position here.
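If that ever becomes necessary, something along these lines could work; `freshMap` is assumed to map `rowKey` to row number, and `rowAssignments` stands in for however assignments are actually emitted:

```ts
// Hypothetical helper: find which rowKey freshMap currently attributes to
// the donor row, and re-point it at the deleted slot it was swapped into.
function reassignDonor(
  freshMap: Map<string, number>,
  rowAssignments: Map<string, number>,
  donorRow: number,
  deletedRow: number,
): void {
  for (const [rowKey, rowNumber] of freshMap) {
    if (rowNumber === donorRow) {
      rowAssignments.set(rowKey, deletedRow) // survivor now lives in the deleted slot
      return
    }
  }
}
```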
```ts
    bufferedUpdates.push({ rowNumber: deletedRow, values: blankRow })
    blanked++
  }
} else if (K > 0 && isNarrow) {
```
Good defensive fallback. If this warn ever fires, it means the narrow-read suppression logic above has drifted; at that point it's worth investigating rather than just silently blanking.
Summary
Before this PR, the Google Sheets destination handled deletes by updating the row with a `deleted=true` field; it now actually removes the row from the table.
Two things were broken end-to-end before I could test the destination with live deletes, so this PR depends on #331 and #332.
How delete handling works
Records with `cleanData['deleted'] === true` go into a new per-stream `deleteBuffers` map, alongside the existing `appendBuffers` and `updateBuffers`.
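Roughly, the routing looks like this; a sketch only, where the buffer entry shape, the `route` helper, and the `rowExists` flag are illustrative rather than the destination's actual names:

```ts
type BufferedRecord = { rowKey: string; values: string[] }

const appendBuffers = new Map<string, BufferedRecord[]>()
const updateBuffers = new Map<string, BufferedRecord[]>()
const deleteBuffers = new Map<string, BufferedRecord[]>() // new in this PR

// Illustrative router: deletes win, known rows update, new rows append.
function route(
  stream: string,
  record: BufferedRecord,
  cleanData: Record<string, unknown>,
  rowExists: boolean,
): void {
  const buffers =
    cleanData['deleted'] === true ? deleteBuffers : rowExists ? updateBuffers : appendBuffers
  const list = buffers.get(stream) ?? []
  list.push(record)
  buffers.set(stream, list)
}
```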
Phase 1, donate appends. For each deleted row, pop a pending append and write it as an update at the deleted slot. No new row at the bottom.
Phase 2, tail swap. When deletes outnumber appends, take the bottom-most surviving rows and pair each body-delete with one of them: write the donor's values into the deleted row, then blank the donor's original row. Deletes that already fall inside the trailing range are blanked in place. Result: data stays contiguous at the top, and blanks only appear at the bottom. Both phases are condensed in the sketch below.
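A self-contained sketch of both phases, assuming `deleteList` is sorted ascending and `survivorRows` already excludes deleted rows; names outside the quoted hunks are hypothetical:

```ts
type Update = { rowNumber: number; values: string[] }

// Condensed, hypothetical version of both compaction phases.
// survivorRows maps row number -> values for the bottom-most surviving rows.
function compact(
  deleteList: number[],
  appends: string[][],
  survivorRows: Map<number, string[]>,
  width: number,
): Update[] {
  const updates: Update[] = []
  const blankRow: string[] = Array(width).fill('')

  // Phase 1: donate pending appends into deleted slots (no new bottom rows).
  let donorIdx = 0
  let donated = 0
  while (donated < deleteList.length && donorIdx < appends.length) {
    updates.push({ rowNumber: deleteList[donated++], values: appends[donorIdx++] })
  }

  // Phase 2: tail-swap the remaining deletes with bottom-most survivors.
  const survivors = [...survivorRows.entries()].sort((a, b) => b[0] - a[0])
  let s = 0
  for (let d = donated; d < deleteList.length; d++) {
    const deletedRow = deleteList[d]
    const donor = survivors[s]
    if (donor !== undefined && donor[0] > deletedRow) {
      // Body delete: copy the donor's values up, blank the donor's old row.
      updates.push({ rowNumber: deletedRow, values: donor[1] })
      updates.push({ rowNumber: donor[0], values: blankRow })
      s++
    } else {
      // Already inside the trailing range: blank in place.
      updates.push({ rowNumber: deletedRow, values: blankRow })
    }
  }
  return updates
}
```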
Also added an in-batch reconciliation step: if a pending append and a delete share a `rowKey` (typical when `customer.created` and `customer.deleted` arrive in the same poll cycle), drop both. The row was never in the sheet, the delete has nothing to target, and appending just to compact would be wasted work.
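The reconciliation itself amounts to a set intersection over `rowKey`s; a sketch with illustrative buffer shapes:

```ts
type Keyed = { rowKey: string }

// Drop append/delete pairs that share a rowKey within one batch: the append
// never reached the sheet, so the delete has nothing to target.
function reconcile<T extends Keyed>(appends: T[], deletes: T[]): { appends: T[]; deletes: T[] } {
  const deleteKeys = new Set(deletes.map((d) => d.rowKey))
  const matched = new Set(appends.filter((a) => deleteKeys.has(a.rowKey)).map((a) => a.rowKey))
  return {
    appends: appends.filter((a) => !matched.has(a.rowKey)),
    deletes: deletes.filter((d) => !matched.has(d.rowKey)),
  }
}
```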
Tests
Added a `describe('delete handling')` block with 15 cases covering routing, both compaction phases, in-batch reconciliation, multi-stream isolation, composite primary keys, and a no-gap invariant. Run them with:

```sh
pnpm --filter @stripe/sync-destination-google-sheets exec vitest run src/index.test.ts -t 'delete handling'
```