World models serve reinforcement learning, control, and adaptation by predicting future observations from high-dimensional sensor data (e.g., camera images). However, their learned latent representations often act as black boxes with no clear connection to the underlying physical state, which makes it difficult to provide strong guarantees based on such world models. This project demos Physically Interpretable World Models (PIWM), a novel architecture that aligns learned latent representations with real-world physical quantities.
forked from MrinallU/World-Model-Visualizer
Demos the efficacy of physically interpretable world model generation in an online environment. Utilizes React to load and control ONNX models and to simulate CartPole dynamics for testing.
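The in-browser simulation presumably steps the classic CartPole dynamics so the ONNX world model's predictions can be compared against ground truth. A minimal sketch of one integration step is below, using the standard parameters from the Barto, Sutton & Anderson formulation (as popularized by Gym's CartPole-v1); the function and constant names are illustrative and may differ from this repo's actual code.

```javascript
// CartPole physical constants (values from the standard Gym CartPole-v1 setup;
// assumed here, not taken from this repo's source).
const GRAVITY = 9.8;
const MASS_CART = 1.0;
const MASS_POLE = 0.1;
const TOTAL_MASS = MASS_CART + MASS_POLE;
const POLE_HALF_LENGTH = 0.5;                          // half the pole's length (m)
const POLEMASS_LENGTH = MASS_POLE * POLE_HALF_LENGTH;
const FORCE_MAG = 10.0;                                // magnitude of the applied force (N)
const TAU = 0.02;                                      // seconds between state updates

// One Euler step of the dynamics.
// State is [x, xDot, theta, thetaDot]; action is 0 (push left) or 1 (push right).
function cartpoleStep([x, xDot, theta, thetaDot], action) {
  const force = action === 1 ? FORCE_MAG : -FORCE_MAG;
  const cosTheta = Math.cos(theta);
  const sinTheta = Math.sin(theta);
  const temp =
    (force + POLEMASS_LENGTH * thetaDot * thetaDot * sinTheta) / TOTAL_MASS;
  const thetaAcc =
    (GRAVITY * sinTheta - cosTheta * temp) /
    (POLE_HALF_LENGTH *
      (4.0 / 3.0 - (MASS_POLE * cosTheta * cosTheta) / TOTAL_MASS));
  const xAcc = temp - (POLEMASS_LENGTH * thetaAcc * cosTheta) / TOTAL_MASS;
  return [
    x + TAU * xDot,
    xDot + TAU * xAcc,
    theta + TAU * thetaDot,
    thetaDot + TAU * thetaAcc,
  ];
}
```

In the visualizer, a step like this would provide the ground-truth next state, while the loaded ONNX model (e.g., via `onnxruntime-web`) produces the world model's predicted state for side-by-side comparison.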
Trustworthy-Engineered-Autonomy-Lab/World-Model-Visualizer
Languages
- JavaScript 98.3%
- HTML 1.4%
- CSS 0.3%