This repository was archived by the owner on Jul 7, 2023. It is now read-only.

Commit 8839cf9

better walkthrough README
The default batch_size for transformer_base_single_gpu has been decreased to 2048 in transform.py (it was previously 4096, I think). If that is still too much, the user should pass an even smaller value.
1 parent 1bf3b44 commit 8839cf9

File tree: 1 file changed (+2 −2 lines)


README.md

Lines changed: 2 additions & 2 deletions
@@ -89,7 +89,7 @@ t2t-datagen \
   --problem=$PROBLEM
 
 # Train
-# * If you run out of memory, add --hparams='batch_size=2048' or even 1024.
+# * If you run out of memory, add --hparams='batch_size=1024'.
 t2t-trainer \
   --data_dir=$DATA_DIR \
   --problems=$PROBLEM \
@@ -166,7 +166,7 @@ python -c "from tensor2tensor.models.transformer import Transformer"
 with `Modality` objects, which are specified per-feature in the dataset/task
 specification.
 * Support for multi-GPU machines and synchronous (1 master, many workers) and
-  asynchrounous (independent workers synchronizing through a parameter server)
+  asynchronous (independent workers synchronizing through a parameter server)
   [distributed training](https://github.com/tensorflow/tensor2tensor/tree/master/docs/distributed_training.md).
 * Easily swap amongst datasets and models by command-line flag with the data
   generation script `t2t-datagen` and the training script `t2t-trainer`.
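
For readers following the walkthrough, a full training invocation with the memory-saving override from the new README line might look roughly like the sketch below. Only --data_dir, --problems, and --hparams appear in this diff; the --model, --hparams_set, and --output_dir values are illustrative assumptions taken from the walkthrough's setup, not part of this commit.

# Sketch: t2t-trainer with a reduced batch size for limited-memory GPUs
t2t-trainer \
  --data_dir=$DATA_DIR \
  --problems=$PROBLEM \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu \
  --output_dir=$TRAIN_DIR \
  --hparams='batch_size=1024'  # drop further (e.g. 512) if memory is still exhausted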
