Add HF-only LoRA training path without llm-foundry dependencies #29
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 99c698edae
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```
Guard LoRA artifact writes to the main process
When this script is launched in distributed mode (for example via torchrun), every rank calls model.save_pretrained(...) and tokenizer.save_pretrained(...) on the same output_dir, which can race and produce corrupted or inconsistent adapter artifacts. The save paths should be gated behind a world-process-zero check (and similarly for the merged save block) so only one process writes the final files.
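A minimal sketch of that gating, assuming model, tokenizer, and output_dir already exist in the script; the is_main_process helper here is illustrative, built on torch.distributed:

```python
import torch.distributed as dist


def is_main_process() -> bool:
    # Single-process runs (no distributed backend) count as the main process;
    # otherwise only global rank 0 is allowed to write artifacts.
    return not dist.is_available() or not dist.is_initialized() or dist.get_rank() == 0


if is_main_process():
    # Only rank 0 writes the LoRA adapter and tokenizer files.
    model.save_pretrained(output_dir)
    tokenizer.save_pretrained(output_dir)

if dist.is_available() and dist.is_initialized():
    # Hold the other ranks here until the write completes so nothing
    # reads from (or tears down) a half-written output_dir.
    dist.barrier()
```

The same check would wrap the merged-model save block mentioned above; if the script already constructs a transformers Trainer, its trainer.is_world_process_zero() check serves the same purpose.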
Useful? React with 👍 / 👎.
No description provided.