
Enhance README with Customization details#1497

Open
ram-cherukuri wants to merge 6 commits into main from ram-cherukuri-patch-8

Conversation

@ram-cherukuri
Collaborator

Added sections on codebase architecture and a customization guide. Included an example configuration for adapting GeoTransolver to internal pipe-flow problems.

PhysicsNeMo Pull Request

Description

Checklist

Dependencies

Review Process

All PRs are reviewed by the PhysicsNeMo team before merging.

Depending on which files are changed, GitHub may automatically assign a maintainer for review.

We are also testing AI-based code review tools (e.g., Greptile), which may add automated comments with a confidence score.
This score reflects the AI's assessment of merge readiness; it is not a qualitative judgment of your work, nor an indication that the PR will be accepted or rejected.

AI-generated feedback should be reviewed critically for usefulness.
You are not required to respond to every AI comment, but they are intended to help both authors and reviewers.
Please react to Greptile comments with 👍 or 👎 to provide feedback on their accuracy.

@greptile-apps
Contributor

greptile-apps bot commented Mar 13, 2026

Greptile Summary

This PR enhances the README for the transformer_models external aerodynamics example with a "Codebase Architecture" section, a multi-part "Customization Guide", an integration checklist for new CFD domains, and a worked example showing how to adapt GeoTransolver to an internal pipe-flow problem. The additions are helpful for onboarding, but two issues in the sample YAML snippet need fixing before this is ready to merge.

  • [cite_start] / [cite: ...] artifacts in the YAML block (lines 287–308): Six config keys are prefixed with [cite_start] and/or suffixed with [cite: N] markers — AI citation tokens that were not removed before the PR was opened. In YAML, a bare [ opens a flow sequence, so any line starting with [cite_start] will fail to parse.
  • Training config sub-tree doesn't match the real Hydra layout: The sample config places precision, optimizer: "muon", and learning_rate under a training: key. In the actual configs, precision is a root-level key, and the optimizer is configured with Hydra's _target_ pattern (not a string). A copy-paste user would see no effect and be confused.
  • fp8 hardware list omits Blackwell: The new integration checklist says "Hopper/Ada Lovelace" for fp8 support, but the existing README already mentions Hopper, Blackwell, and Ada Lovelace — creating a minor but confusing inconsistency.
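For context on the second point, Hydra configs typically instantiate objects via a `_target_` class path rather than a bare string name. A hypothetical sketch of that pattern (key names and values assumed for illustration, not taken from the actual repo configs):

```yaml
# Hypothetical Hydra-style layout: precision lives at the root, and the
# optimizer is selected by its fully qualified class path via _target_,
# not by a plain string like "muon".
precision: "bfloat16"

optimizer:
  _target_: torch.optim.AdamW   # class Hydra will instantiate
  lr: 2e-4
  weight_decay: 0.01
```

A config written this way can be materialized at runtime with `hydra.utils.instantiate(cfg.optimizer, params=model.parameters())`.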

Important Files Changed

Filename Overview
examples/cfd/external_aerodynamics/transformer_models/README.md Adds architecture overview, customization guide, and a sample YAML config for pipe flow adaptation — but the YAML snippet contains [cite_start] / [cite: ...] AI-citation artifacts that make it invalid YAML, and the training sub-tree structure doesn't match the actual Hydra config layout.

Last reviewed commit: 4f46f1d

Comment on lines +287 to +308
[cite_start]functional_dim: 4 # Input: Coords (3) + Inlet Velocity/Scalar (1) [cite: 22, 24]
[cite_start]out_dim: 4 # Output: Velocity (Ux, Uy, Uz) + Pressure (p) [cite: 83]
[cite_start]geometry_dim: 3 # 3D internal pipe mesh [cite: 65, 71]
[cite_start]slice_num: 64 # Spatial slices for PhysicsAttention [cite: 71]
[cite_start]n_layers: 6 # Depth of the transformer backbone [cite: 71]
use_te: True # Enable TransformerEngine for speed

# Data Handling & Normalization
data:
  resolution: 32768 # Points per GPU for large pipe meshes
  normalization_dir: "pipe_stats/" # Path to normalization .npz
  return_mesh_features: False # Set to True only for inference
  input_dir: "data/pipe_flow/train"
  input_dir_val: "data/pipe_flow/val"

# Training Strategy
training:
  precision: "bfloat16" # Recommended for stability on modern GPUs
  num_epochs: 1000 # Internal flow may require longer convergence
  save_interval: 50 # Checkpoint frequency
  optimizer: "muon" # Highly recommended for Transolver backbones
  [cite_start]learning_rate: 2e-4 # Base LR for the Muon/AdamW optimizer [cite: 37]

This comment was marked as outdated.
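For reference, the model-level portion of the snippet above with the `[cite_start]` / `[cite: ...]` tokens stripped — assuming nothing else changes — would read:

```yaml
functional_dim: 4   # Input: Coords (3) + Inlet Velocity/Scalar (1)
out_dim: 4          # Output: Velocity (Ux, Uy, Uz) + Pressure (p)
geometry_dim: 3     # 3D internal pipe mesh
slice_num: 64       # Spatial slices for PhysicsAttention
n_layers: 6         # Depth of the transformer backbone
use_te: True        # Enable TransformerEngine for speed
```

The `training:` sub-tree would still need restructuring to match the real Hydra layout, per the review summary.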

Comment on lines +302 to +309
# Training Strategy
training:
  precision: "bfloat16" # Recommended for stability on modern GPUs
  num_epochs: 1000 # Internal flow may require longer convergence
  save_interval: 50 # Checkpoint frequency
  optimizer: "muon" # Highly recommended for Transolver backbones
  [cite_start]learning_rate: 2e-4 # Base LR for the Muon/AdamW optimizer [cite: 37]

This comment was marked as outdated.


* [ ] **Update Target Labels**: Modify the normalization script if your labels (e.g., Pressure, Shear) differ from the DrivaerML defaults.

* [ ] **Verify Precision Compatibility**: If using **fp8**, ensure your hardware supports it (Hopper/Ada Lovelace).
Contributor


fp8 hardware list omits Blackwell

The checklist item mentions only "Hopper/Ada Lovelace", but the existing README (Training Precision section, line 62) already states that fp8 is available on "Hopper, Blackwell, Ada Lovelace, and others." Omitting Blackwell here creates an inconsistency that could mislead users with newer hardware.

Suggested change
* [ ] **Verify Precision Compatibility**: If using **fp8**, ensure your hardware supports it (Hopper/Ada Lovelace).
* [ ] **Verify Precision Compatibility**: If using **fp8**, ensure your hardware supports it (Hopper, Blackwell, Ada Lovelace, or other fp8-capable GPUs).

Removed training strategy section from README.
Added detailed architecture overview and customization guide for the Lagrangian MeshGraphNet codebase.
@ram-cherukuri ram-cherukuri requested a review from mnabian as a code owner March 13, 2026 15:27
Collaborator

@coreyjadams coreyjadams left a comment


Hi @ram-cherukuri,

I think the AI used to make these updates went off the rails here. Overall, I don't think we should be using AI to update our readmes like this without closer supervision; they still need to be vetted for technical content quite closely, almost at the level of our example codes.

A lot of the content generated here is speculative at best, and kinda wrong at worst, and we shouldn't merge this.

How do you want to proceed?

Updated README to address inaccuracies
@ram-cherukuri ram-cherukuri requested a review from ktangsali March 16, 2026 16:23

Below is a sample Hydra configuration block (e.g., `conf/model/geotransolver_pipe_flow.yaml`)

### Sample Hydra Configuration: `geotransolver_pipe_flow.yaml`
Collaborator Author


@RishikeshRanade Can you take a stab at creating a pseudo-config that would adapt this external aero sample to an internal flow sample?


#### **B. Physics-Informing**

* Builders can physics-inform their training by injecting custom PDE-based constraints (e.g., mass conservation) as context, using the `physicsnemo.physicsinformer` module.
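As an illustration of what such a constraint could look like — a generic NumPy sketch of a mass-conservation penalty, not the `physicsnemo.physicsinformer` API — one can penalize the divergence of the predicted velocity field alongside the data loss:

```python
import numpy as np

def mass_conservation_residual(u, v, w, dx):
    # Central-difference divergence of a predicted velocity field on a
    # uniform grid: r = dUx/dx + dUy/dy + dUz/dz. For incompressible
    # flow this should vanish (mass conservation).
    dudx = np.gradient(u, dx, axis=0)
    dvdy = np.gradient(v, dx, axis=1)
    dwdz = np.gradient(w, dx, axis=2)
    return dudx + dvdy + dwdz

def total_loss(data_loss, residual, weight=0.1):
    # Add a mean-squared divergence penalty to the data loss; `weight`
    # balances the physics term against the supervised term.
    return data_loss + weight * np.mean(residual ** 2)
```

The same idea carries over to a torch-based training loop: compute the residual from the model's velocity outputs and add its mean square to the loss before `backward()`.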
Collaborator Author


@ktangsali Can you update this section to indicate how one can add Physics Loss to GeoTransolver training if they want to.

ram-cherukuri and others added 2 commits March 23, 2026 11:19
Clarified configuration names for surface and volumetric data in README.
