uTensor is as small as 2kB. It supports multiple memory planning strategies and integrates well with optimized computational kernels, for example, CMSIS-NN from Arm. The uTensor C++ runtime interfaces are clear and designed specifically for embedded ML. The uTensor Python SDK is customizable from the ground up. Hardware and software designers can take advantage of the extensibility uTensor offers to prototype and deploy their solutions.
We find that code generation strikes a good balance among the trade-offs above.
The rest of the tutorial presents the steps to set up your environment and deploy your first model with uTensor.
## Environment Setup
This tutorial focuses on the instructions for macOS; however, other operating systems follow very similar steps.
### Install Brew
Brew is a user-space package manager for macOS. In the terminal, enter the install command from [brew.sh](https://brew.sh).
Other systems use different package managers, for example, `apt` on Ubuntu Linux.
### Install and Set Up Python
Using a Python virtual environment for our TinyML development is good practice. A dedicated Python environment protects the system's Python and keeps our package dependencies manageable.
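A minimal sketch with the standard `venv` module (the environment name `utenv` is an assumption, not taken from the tutorial):

```shell
# Create an isolated Python environment in the current directory
python3 -m venv utenv
# Activate it for this shell session
. utenv/bin/activate
# The environment's interpreter is now first on PATH
python --version
```

Packages installed while the environment is active stay inside `utenv` and never touch the system's Python.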
Activate and deactivate a virtual environment:
```
# Activate it
$ ut
```
Here's the content of the repository:
```
│   └── my_model.hpp
└── uTensor.lib
```
The Jupyter notebook [mnist_conv.ipynb](https://github.com/uTensor/utensor-helloworld/blob/master/mnist_conv.ipynb) hosts the training code and uses the uTensor API to generate C++ code from the trained model. For simplicity, the project already contains the generated C++ code in the `constant` and `models` folders, so it is ready to be compiled. This pre-generated code will be overwritten when you run the notebook in the next section.
The Jupyter notebook is launched from the project root:

```
$ jupyter notebook
```
### Defining the Model
We define a convolutional neural network with less than 5kB of parameters (after quantization):
```python
class MyModel(Model):
    def __init__(self):
```
```
Total params: 4,874
Trainable params: 4,874
Non-trainable params: 0
```
The total number of parameters is 4,874. Because model parameters are typically constant during inference, they are stored in the ROM of your device.
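As a back-of-the-envelope check (the one-byte-per-weight figure is an assumption based on 8-bit quantization, not a number from the uTensor docs), the quantized parameters fit comfortably under the 5kB quoted earlier:

```python
# Rough ROM estimate for the quantized model: 4,874 parameters stored
# as int8 weights, i.e. one byte per parameter after 8-bit quantization.
params = 4874
bytes_per_param = 1  # assumption: int8 weights
rom_kb = params * bytes_per_param / 1024
print(f"{rom_kb:.2f} kB")  # about 4.76 kB
```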
Activations, on the other hand, may change on every inference cycle; thus, they are placed in RAM. For a sequential model, a simple way to estimate RAM usage is to look at the combined size of a layer's input and output at a given time.
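That metric can be sketched in a few lines of Python. This is a hand-rolled illustration, not part of the uTensor API, and the layer shapes and one-byte activations below are made-up assumptions:

```python
def peak_activation_bytes(layer_io_shapes, bytes_per_element=1):
    """Largest input-plus-output buffer footprint across all layers."""
    def nbytes(shape):
        n = 1
        for dim in shape:
            n *= dim
        return n * bytes_per_element

    # At any moment, one layer's input and output buffers coexist in RAM;
    # the peak over all layers bounds the activation RAM requirement.
    return max(nbytes(inp) + nbytes(out) for inp, out in layer_io_shapes)

# Hypothetical (input_shape, output_shape) pairs for a small conv net
shapes = [
    ((28, 28, 1), (28, 28, 8)),   # conv:   784 + 6272 = 7056 bytes
    ((28, 28, 8), (14, 14, 8)),   # pool:  6272 + 1568 = 7840 bytes  <- peak
    ((14, 14, 8), (10,)),         # dense: 1568 +   10 = 1578 bytes
]
print(peak_activation_bytes(shapes))  # 7840
```

Note that the peak here comes from the pooling layer, whose large input buffer must coexist with its output.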
## Conclusion
Congratulations on completing this example. This tutorial covers quite a bit of ground and is fairly advanced. Stay tuned for more writings on TinyML to come. We will be bringing you content not only on the deployment of embedded ML models but also on how to extend uTensor to do exactly what you want, such as node-fusion, adding operators, custom memory plans, data collection, etc.
Also, there are many ways you can help the project, for example:
#### Star the Project
[Starring the project](https://github.com/uTensor/uTensor) is a great way to recognize our work and support the community. Please help us spread the word!
#### Join us on Slack
Our [Slack workspace](https://join.slack.com/t/utensor/shared_invite/zt-6vf9jocy-lzk5Aw11Z8M9GPf_KS5I~Q) is full of discussions on the latest ideas and development in uTensor. If you have questions, ideas, or want to get involved in the project, Slack is a great place to start.