**For production deployment**, we recommend using ahead-of-time tuned configurations rather than relying on runtime autotuning. The autotuning process can be time-consuming and resource-intensive, making it unsuitable for production environments where predictable performance and startup times are critical.
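One way to follow this recommendation is to pin a previously tuned configuration directly on the kernel, so no autotuning runs at startup. The snippet below is a sketch: the specific `Config` fields shown (`block_sizes`, `num_warps`) are assumptions and should be checked against the Configurations section below.

```python
# Sketch: pinning an ahead-of-time tuned config so autotuning is skipped.
# The exact Config fields are illustrative assumptions; consult the
# "Configurations" section for the authoritative option list.
@helion.kernel(config=helion.Config(block_sizes=[[32, 32], [32]], num_warps=4))
def my_kernel(x: torch.Tensor) -> torch.Tensor:
    ...
```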
### Static shapes and autotuning keys
By default Helion uses static shapes (`static_shapes=True`). This means each unique input shape/stride signature is treated as its own specialization and will be autotuned separately. This typically yields the best performance, but may increase autotuning time when many shapes are encountered.
If you want to reduce autotuning time by sharing configurations between different shapes, set `static_shapes=False`. In this mode, the autotuning key ignores exact sizes, allowing a single tuned config to be reused across multiple shapes. This can come with a performance penalty compared to fully specialized static shapes.
```python
@helion.kernel(static_shapes=False)
def my_kernel(x: torch.Tensor) -> torch.Tensor:
    ...
```
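The trade-off above can be pictured as a change in the key used to look up tuned configurations. The following is a toy model of that idea, not Helion's actual internals; `FakeTensor` and `autotune_key` are invented names for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FakeTensor:
    # Stand-in for a real tensor: just the metadata the cache key uses.
    shape: tuple
    strides: tuple
    dtype: str

def autotune_key(t: FakeTensor, static_shapes: bool) -> tuple:
    if static_shapes:
        # Specialize on the exact size/stride signature: every new shape
        # gets its own key, and therefore its own autotuning run.
        return (t.dtype, t.shape, t.strides)
    # Shape-agnostic key: tensors of the same rank and dtype share
    # one cached config, at some cost in per-shape performance.
    return (t.dtype, len(t.shape))

configs: dict = {}

def get_config(t: FakeTensor, static_shapes: bool) -> str:
    key = autotune_key(t, static_shapes)
    if key not in configs:
        configs[key] = f"tuned-for-{key}"  # stand-in for an expensive autotune
    return configs[key]
```

With `static_shapes=True`, two tensors of different sizes produce different keys and trigger separate tuning; with `static_shapes=False`, they share a single cached config.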
## Configurations
Helion configurations include the following options: