
Add HF-only LoRA training path without llm-foundry dependencies #29

Open

dalistarh wants to merge 1 commit into main from codex/lora-only-hf

Conversation

@dalistarh
Contributor

No description provided.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 99c698edae

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +328 to +329
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)


P2: Guard LoRA artifact writes to the main process

When this script is launched in distributed mode (for example via torchrun), every rank executes model.save_pretrained(...) and tokenizer.save_pretrained(...) against the same output_dir, which can race and produce corrupted or inconsistent adapter artifacts. The save paths should be gated behind a world-process-zero check (and similarly for the merged-save block) so that only one process writes the final files.

Useful? React with 👍 / 👎.
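For reference, a minimal sketch of the gating the comment describes, assuming the script uses plain torch.distributed. The helper names save_adapter and is_main_process and the trailing barrier are illustrative additions, not code from the PR; model, tokenizer, and output_dir are the objects already used in the flagged lines.

    import torch.distributed as dist

    def is_main_process() -> bool:
        # Under torchrun a process group is initialized and only rank 0 should write files;
        # in a single-process run there is no process group, so this always returns True.
        return not dist.is_initialized() or dist.get_rank() == 0

    def save_adapter(model, tokenizer, output_dir: str) -> None:
        if is_main_process():
            model.save_pretrained(output_dir)      # adapter weights written once
            tokenizer.save_pretrained(output_dir)  # tokenizer files written once
        if dist.is_initialized():
            dist.barrier()  # keep other ranks from proceeding before the files exist

If the script instead builds a transformers Trainer, trainer.is_world_process_zero() would be the equivalent check; the same gate should wrap the merged-save block mentioned above.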
