Commit c981d6a

Merge pull request #4 from 8bitkick/patch-4
Update getting-started.md
2 parents 8769214 + 0cecd9d commit c981d6a

1 file changed: 12 additions, 12 deletions


docs/getting-started.md

Lines changed: 12 additions & 12 deletions
@@ -4,9 +4,9 @@
 
 ![alt text](https://github.com/uTensor/uTensor/raw/master/docs/img/uTensorFlow.jpg "flow graph")
 
-uTensor is as small as 2kB. It supports multiple memory planning strategies and integrates well with optimized computational kernels, for example, CMSIS-NN from Arm. The uTensor C++ runtime interfaces are clear and designed specific for embedded ML. The uTensor Python SDK is designed for customizability from the ground up. Hardware and software designers can take advantage of the extensbility uTensor offers to prototype and deploy their solutions.
+uTensor is as small as 2kB. It supports multiple memory planning strategies and integrates well with optimized computational kernels, for example, CMSIS-NN from Arm. The uTensor C++ runtime interfaces are clear and designed specifically for embedded ML. The uTensor Python SDK is customizable from the ground up. Hardware and software designers can take advantage of the extensibility uTensor offers to prototype and deploy their solutions.
 
-We find the code-generation is a good balance weighting all the trade-offs above.
+We find that code generation strikes a good balance among the trade-offs above.
 
 The rest of the tutorial presents the steps to set up your environment and deploy your first model with uTensor.
 
@@ -21,7 +21,7 @@
 
 
 ## Environment Setup
-This tutorial focus on the instructions for MacOS; however, other operating systems follow very similar steps.
+This tutorial focuses on macOS; however, other operating systems follow very similar steps.
 
 ### Install Brew
 Brew is a user-space package manager for MacOS. In the terminal, enter:
@@ -31,9 +31,9 @@ $ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/
 Other systems use different package managers, for example, `apt` on Ubuntu Linux.
 
 ### Install and Setup Python
-We should never use the system's Python for our developments. Using a Python virtual environment for our TinyML development is a better practice. A dedicated Python environment to protects the system's Python and to keep our package dependencies manageable.
+Using a Python virtual environment for our TinyML development is good practice. A dedicated Python environment protects the system's Python and keeps our package dependencies manageable.
 #### Install [`pyenv`](https://github.com/pyenv/pyenv)
-`pyenv` is a nice software that makes switching between versions of Python runtime frictionless. It is installed with:
+`pyenv` is a tool that makes switching between Python versions frictionless. It is installed with:
 ```bash
 $ brew update
 $ brew install pyenv
@@ -64,7 +64,7 @@ Python 3.6.8
 $ echo 'alias ut="source ~/.pyvenv/ut/bin/activate"' >> ~/.zshrc
 $ source ~/.zshrc
 ```
-Activating and de-activating a virtual environment:
+Activate and deactivate a virtual environment:
 ```
 # Activate it
 $ ut
@@ -121,7 +121,7 @@ Here's the content of the repository:
 │   └── my_model.hpp
 └── uTensor.lib
 ```
-The Jupyter-notebook, [mnist_conv.ipynb](https://github.com/uTensor/utensor-helloworld/blob/master/mnist_conv.ipynb), hosts the training code and uses the uTensor API, which generates C++ code from the trained model. For simplicity, the project already contains the generated C++ code in the `constant` and `models` folders, so they are ready to be compiled. These pre-generated code will be overwritten after you run the notebook in the next section.
+The Jupyter notebook [mnist_conv.ipynb](https://github.com/uTensor/utensor-helloworld/blob/master/mnist_conv.ipynb) hosts the training code and uses the uTensor API to generate C++ code from the trained model. For simplicity, the project already contains the generated C++ code in the `constant` and `models` folders, so it is ready to be compiled. This pre-generated code will be overwritten when you run the notebook in the next section.
 
 
 
@@ -135,7 +135,7 @@ The Jupyter-notebook is launched from the project root:
 ```
 
 ### Defining the Model
-We defined a convulutional neural network with less than 5kB (after quantization) of parameters:
+We defined a convolutional neural network with less than 5kB (after quantization) of parameters:
 ```python
 class MyModel(Model):
     def __init__(self):
@@ -379,15 +379,15 @@ Total params: 4,874
 Trainable params: 4,874
 Non-trainable params: 0
 ```
-The total number of parameters is around 4,874. Because model parameters are typically constants for inferencing's consideration, they are stored in the ROM of your device.
+The total number of parameters is 4,874. Because model parameters are typically constant during inference, they are stored in the ROM of your device.
 
-Activations, on the other hand, may change through every inference cycle; thus, they are placed in RAM. For sequential model, a simple metric to estimate the RAM usage is by looking the combined size of the input and output of a layer at a given time.
+Activations, on the other hand, may change with every inference cycle; thus, they are placed in RAM. For a sequential model, a simple way to estimate RAM usage is to look at the combined size of a layer's input and output at a given time.
 
 ## Conclusion
-Congratulation on completing this example. This tutorial covers quite a bit of information and is quite advanced. Stay tuned for more writings on TinyML to come. We will be bringing you content on not only the deployment of embedded ML models but also how to extend uTensor to do exactly what you want, such as node-fusion, adding operators, custom memory plans, data collection, etc.
+Congratulations on completing this example. This tutorial covers quite a bit of information. Stay tuned for more writing on TinyML: we will cover not only the deployment of embedded ML models but also how to extend uTensor to do exactly what you want, such as node fusion, adding operators, custom memory plans, and data collection.
 
 Also, there are many ways you can help the project, for example:
 #### Star the Projects
 [Starring the project](https://github.com/uTensor/uTensor) is a great way to recognize our work and support the community. Please help us spread the word!
 #### Join us on Slack
-Our [Slack workspace](https://join.slack.com/t/utensor/shared_invite/zt-6vf9jocy-lzk5Aw11Z8M9GPf_KS5I~Q) is full of discussions on the latest ideas and development in uTensor. If you have questions, ideas, or want to get involved in the project, Slack is a great place to start.
+Our [Slack workspace](https://join.slack.com/t/utensor/shared_invite/zt-6vf9jocy-lzk5Aw11Z8M9GPf_KS5I~Q) is full of discussions on the latest ideas and developments in uTensor. If you have questions or ideas, or want to get involved in the project, Slack is a great place to start.
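
The ROM and RAM figures discussed in the diff above lend themselves to a quick back-of-envelope check: parameters (ROM) follow the standard per-layer counting formulas, and peak activation RAM for a sequential model is roughly the largest input-plus-output pair among consecutive layers. A minimal sketch in Python — the layer shapes and activation sizes below are illustrative examples, not the notebook's actual architecture:

```python
# Back-of-envelope ROM/RAM estimates for a small sequential CNN.
# All shapes below are illustrative examples, not this model's real ones.

def conv2d_params(kh, kw, c_in, c_out, bias=True):
    # Each of the c_out filters holds kh*kw*c_in weights (+ optional bias).
    return (kh * kw * c_in + int(bias)) * c_out

def dense_params(n_in, n_out, bias=True):
    # One weight per input/output pair (+ optional bias per output).
    return (n_in + int(bias)) * n_out

# ROM: parameters are constant during inference, so they live in flash.
rom_bytes = conv2d_params(3, 3, 1, 8) + dense_params(64, 10)  # 80 + 650
print(rom_bytes)  # → 730 (assuming 1 byte per quantized parameter)

# RAM: peak usage is roughly the largest input+output activation pair
# among consecutive layers, since both buffers are live at once.
activation_sizes = [
    28 * 28,      # input image
    26 * 26 * 8,  # after a 3x3 conv with 8 filters
    13 * 13 * 8,  # after 2x2 pooling
    10,           # output logits
]
peak_ram = max(a + b for a, b in zip(activation_sizes, activation_sizes[1:]))
print(peak_ram)  # → 6760: conv output resident alongside pooled output
```

This bound assumes a layer's input and output buffers coexist; with more aggressive memory planning — one of the strategies uTensor supports — buffers can sometimes be reused, lowering the estimate.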
