Running an AI agent on Galaxy Watch with a 2.8 MB binary #17998
ThinkOffApp
started this conversation in Show and tell
Replies: 0 comments
Sharing a real-world example of on-device AI on wearable hardware that might interest the ExecuTorch community.
ClawWatch (https://github.com/ThinkOffApp/ClawWatch) runs an OpenClaw AI agent natively on Galaxy Watch. The on-device component is NullClaw, a 2.8 MB static ARM binary written in Zig that handles the agent runtime, paired with Vosk for offline speech recognition. LLM inference still goes through a network gateway, but the agent logic, voice capture, and TTS all run locally on the watch.
We chose Zig over a PyTorch-based approach because the total binary size budget on a watch is extremely tight and we needed a single static binary with no runtime dependencies. The 2.8 MB footprint includes the full agent protocol stack.
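For anyone curious what that looks like in practice, a build along these lines produces a fully static, size-optimized ARM binary with Zig. This is a hedged sketch, not the actual ClawWatch build script: the entry-point path and the target triple (here aarch64 with musl) are assumptions, and the real target for Galaxy Watch hardware may differ.

```shell
# Hypothetical invocation; see the ClawWatch repo for the real build setup.
# -O ReleaseSmall optimizes for binary size rather than speed;
# a musl target links libc statically, so the result has no runtime deps.
zig build-exe src/main.zig \
    -target aarch64-linux-musl \
    -O ReleaseSmall \
    -fstrip
```

The same approach scales down well because Zig has no runtime or garbage collector to bundle, which is what makes a low-single-digit-MB footprint for a full agent stack plausible.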
Not using ExecuTorch directly, but thought this use case (persistent AI agent on wearable hardware) might be relevant to the community's work on edge deployment.