Conversation
Removed commented-out print statements for clarity.
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces the LTX-2.3 model configurations and significantly refactors the Gemma text encoder architecture for improved modularity and new features. It integrates advanced attention mechanisms such as gated attention and cross-attention AdaLN, along with a flexible multi-modal guider for enhanced control during inference. Additionally, the audio processing pipeline has been upgraded with bandwidth extension for higher-fidelity audio output.
Code Review
This pull request introduces support for LTX-2.3, which includes a significant refactoring of the Gemma text encoder pipeline, the addition of new features like gated attention and a multi-modal guider, and an updated vocoder with bandwidth extension. The changes are extensive and generally improve the codebase's modularity and capabilities. My review identified a potential double-normalization bug in the attention mechanism and a minor case of code duplication in the configuration loading logic. The feedback provided aims to address these points for improved correctness and maintainability.
```python
if is_self_attn or self.apply_gated_attention:
    q_in = x
else:
    q_in = self.norm_infer_func(x, weight=None, bias=None, eps=1e-6)
```
There appears to be a potential double-normalization issue for the query input (q_in). The cross-attention call sites already pass a normalized x tensor, but when is_self_attn is false and self.apply_gated_attention is false, q_in is normalized again via self.norm_infer_func(x, ...). This could lead to incorrect attention outputs.
It seems the caller is always responsible for passing a correctly normalized tensor. To fix this and simplify the logic, you can remove this conditional assignment and always set q_in = x:
```python
q_in = x
```
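To see why re-normalizing could matter, here is a minimal sketch. The rms_norm stand-in for norm_infer_func and the AdaLN-style ada_scale at the call site are assumptions for illustration, not the actual LightX2V implementation:

```python
import torch

def rms_norm(x, weight=None, eps=1e-6):
    # Stand-in for norm_infer_func: RMSNorm over the last dimension.
    out = x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return out if weight is None else out * weight

# Hypothetical cross-attention call site: the caller normalizes x first,
# here with an assumed AdaLN-style conditioned scale.
x = torch.randn(2, 16, 64)
ada_scale = 1.0 + 0.1 * torch.randn(64)   # assumed conditioning scale
x_normed = rms_norm(x, weight=ada_scale)

# Buggy branch: normalizing again with weight=None rescales the tensor
# and effectively discards the caller's conditioning scale.
q_in_buggy = rms_norm(x_normed)

# Suggested fix: trust the caller and use the input as-is.
q_in_fixed = x_normed

print(torch.allclose(q_in_buggy, q_in_fixed))  # False: the outputs diverge
```

If the caller's normalization carried no learned or conditioned scale, a second plain RMSNorm would be nearly idempotent; it is precisely the modulated case sketched above where double normalization silently changes the result.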
lightx2v/utils/set_config.py
Outdated
```python
# LTX-2 / HuggingFace: root config.json may nest transformer under "transformer".
# LightX2V DiT expects num_layers, rope_type, etc. on the root config.
_nested_transformer = config.get("transformer")
if isinstance(_nested_transformer, dict):
    config.update(_nested_transformer)
```
This logic to flatten the nested transformer config dictionary appears to be duplicated. It's present here in auto_calc_config and also in set_args2config. Since set_config calls set_args2config before auto_calc_config, this logic is executed twice. To avoid redundancy and improve maintainability, you can remove this block from auto_calc_config.
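One way to make the single-flattening invariant explicit is to hoist the block into a shared helper. A minimal sketch, assuming the function names set_args2config and auto_calc_config from the comment above; the _flatten_transformer helper and the args-to-dict conversion are hypothetical:

```python
def _flatten_transformer(config: dict) -> dict:
    """Hypothetical helper: hoist a nested "transformer" dict onto the root
    config so downstream code can read num_layers, rope_type, etc. directly."""
    nested = config.get("transformer")
    if isinstance(nested, dict):
        config.update(nested)
    return config

def set_args2config(args) -> dict:
    config = vars(args).copy()           # assumed args -> dict conversion
    return _flatten_transformer(config)  # flatten exactly once, here

def auto_calc_config(config: dict) -> dict:
    # No flattening here: set_config calls set_args2config first,
    # so any nested "transformer" keys are already on the root config.
    ...
    return config
```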