
Commit 400b00f

[MNT] Fix typos (#1988)

#### What does this implement/fix? Explain your changes.

Correct misspellings with https://github.com/crate-ci/typos

#### What should a reviewer concentrate their feedback on?

Please help me find false positives.
1 parent c78b273 commit 400b00f


41 files changed, +230 -188 lines changed

CHANGELOG.md

Lines changed: 7 additions & 7 deletions
@@ -45,12 +45,12 @@ Release focusing on:
 * [BUG] Fix issue with `EncodeNormalizer(method='standard', center=False)` for scale value by @fnhirwa in https://github.com/sktime/pytorch-forecasting/pull/1902
 * [BUG] fixed memory leak in `TimeSeriesDataset` by using `@cached_property` and clean-up of index construction by @Vishnu-Rangiah in https://github.com/sktime/pytorch-forecasting/pull/1905
 * [BUG] Fix issue with `plot_prediction_actual_by_variable` unsupported operand type(s) for *: 'numpy.ndarray' and 'Tensor' by @fnhirwa in https://github.com/sktime/pytorch-forecasting/pull/1903
-* [BUG] Correcly set lagged variables to known when lag >= horizon by @hubkrieb in https://github.com/sktime/pytorch-forecasting/pull/1910
+* [BUG] Correctly set lagged variables to known when lag >= horizon by @hubkrieb in https://github.com/sktime/pytorch-forecasting/pull/1910
 * [BUG] Updated base_model.py to account for importing error by @Himanshu-Verma-ds in https://github.com/sktime/pytorch-forecasting/pull/1488
 * [BUG][DOC] Fix documentation: pass loss argument to BaseModel in custom models tutorial example by @PranavBhatP in https://github.com/sktime/pytorch-forecasting/pull/1931
 * [BUG] fix broken version inspection if package distribution has `None` name by @lohraspco in https://github.com/sktime/pytorch-forecasting/pull/1926
 * [BUG] fix sporadic `tkinter` failures in CI by @fkiraly in https://github.com/sktime/pytorch-forecasting/pull/1937
-* [BUG] Device inconstency in `MQF2DistributionLoss` raising: RuntimeError: Expected all tensors to be on the same device by @fnhirwa in https://github.com/sktime/pytorch-forecasting/pull/1916
+* [BUG] Device inconsistency in `MQF2DistributionLoss` raising: RuntimeError: Expected all tensors to be on the same device by @fnhirwa in https://github.com/sktime/pytorch-forecasting/pull/1916
 * [BUG] fixed memory leak in BaseModel by detach some tensor by @zju-ys in https://github.com/sktime/pytorch-forecasting/pull/1924
 * [BUG] Fix `TimeSeriesDataSet` wrong inferred `tensor` `dtype` when `time_idx` is included in features by @cngmid in https://github.com/sktime/pytorch-forecasting/pull/1950
 * [BUG] standardize output format of xLSTMTime estimator for point predictions by @sanskarmodi8 in https://github.com/sktime/pytorch-forecasting/pull/1978
@@ -381,7 +381,7 @@ Maintenance update widening compatibility ranges and consolidating dependencies:

 ### Changed

-- Dropping Python 3.6 suppport, adding 3.10 support (#479)
+- Dropping Python 3.6 support, adding 3.10 support (#479)
 - Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479)
 - Changed transformation format for Encoders to dict from tuple (#949)

@@ -408,7 +408,7 @@ Maintenance update widening compatibility ranges and consolidating dependencies:
 - Allow using [torchmetrics](https://torchmetrics.readthedocs.io/) as loss metrics (#776)
 - Enable fitting `EncoderNormalizer()` with limited data history using `max_length` argument (#782)
 - More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
-- Fix concatentation of attention (#902)
+- Fix concatenation of attention (#902)

 ### Fixed

@@ -430,7 +430,7 @@ Maintenance update widening compatibility ranges and consolidating dependencies:
 ### Fixed

 - Fix inattention mutation to `x_cont` (#732).
-- Compatability with pytorch-lightning 1.5 (#758)
+- Compatibility with pytorch-lightning 1.5 (#758)

 ### Contributors

@@ -517,7 +517,7 @@ Maintenance update widening compatibility ranges and consolidating dependencies:

 ### Added

-- Adding a filter functionality to the timeseries datasset (#329)
+- Adding a filter functionality to the timeseries dataset (#329)
 - Add simple models such as LSTM, GRU and a MLP on the decoder (#380)
 - Allow usage of any torch optimizer such as SGD (#380)

@@ -586,7 +586,7 @@ Maintenance update widening compatibility ranges and consolidating dependencies:
 ### Added

 - Adding support for multiple targets in the TimeSeriesDataSet (#199) and amended tutorials.
-- Temporal fusion transformer and DeepAR with support for multiple tagets (#199)
+- Temporal fusion transformer and DeepAR with support for multiple targets (#199)
 - Check for non-finite values in TimeSeriesDataSet and better validate scaler argument (#220)
 - LSTM and GRU implementations that can handle zero-length sequences (#235)
 - Helpers for implementing auto-regressive models (#236)

README.md

Lines changed: 2 additions & 2 deletions
@@ -75,7 +75,7 @@ To implement new models or other custom components, see the [How to implement ne

 # Usage example

-Networks can be trained with the [PyTorch Lighning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html) on [pandas Dataframes](https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe) which are first converted to a [TimeSeriesDataSet](https://pytorch-forecasting.readthedocs.io/en/latest/data.html).
+Networks can be trained with the [PyTorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html) on [pandas Dataframes](https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe) which are first converted to a [TimeSeriesDataSet](https://pytorch-forecasting.readthedocs.io/en/latest/data.html).

 ```python
 # imports for training
@@ -122,7 +122,7 @@ batch_size = 128
 train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)
 val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)

-# create PyTorch Lighning Trainer with early stopping
+# create PyTorch Lightning Trainer with early stopping
 early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=1, verbose=False, mode="min")
 lr_logger = LearningRateMonitor()
 trainer = pl.Trainer(
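
For readers landing on this diff without the surrounding README, the corrected lines come from a usage example along the lines of the following sketch. It assumes the `lightning.pytorch` namespace, and the column names and hyperparameters (`time_idx`, `target`, `series`, `training_cutoff`, encoder/decoder lengths) are illustrative, not taken from the commit:

```python
# Minimal sketch of the README usage the hunks above correct: train a
# TemporalFusionTransformer with a Lightning Trainer and early stopping.
import lightning.pytorch as pl
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet

# `data` is assumed to be a pandas DataFrame with the columns used below
training = TimeSeriesDataSet(
    data[data.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="target",
    group_ids=["series"],
    time_varying_unknown_reals=["target"],
    max_encoder_length=36,
    max_prediction_length=6,
)
validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1)

batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)

# create PyTorch Lightning Trainer with early stopping
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=1, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(max_epochs=30, callbacks=[lr_logger, early_stop_callback])

# dataset-dependent parameters are inferred via .from_dataset()
tft = TemporalFusionTransformer.from_dataset(training, learning_rate=0.03, hidden_size=16)
trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
```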

docs/source/faq.rst

Lines changed: 2 additions & 2 deletions
@@ -27,7 +27,7 @@ Creating datasets
 200 for the decoder length. Consider that longer lengths increase the time it takes
 for your model to train.

-The ratio of decoder and encoder length depends on the used alogrithm.
+The ratio of decoder and encoder length depends on the used algorithm.
 Look at :ref:`documentation <models>` to get clues.

 * **It takes very long to create the dataset. Why is that?**
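
In ``TimeSeriesDataSet`` terms, the lengths this FAQ entry refers to map onto two constructor arguments; the values below are purely illustrative:

```python
# Sketch: encoder length (observed history) vs. decoder length (forecast
# horizon) as TimeSeriesDataSet arguments; both values are illustrative.
from pytorch_forecasting import TimeSeriesDataSet

# `data` is assumed to be a pandas DataFrame as in the README sketch above
dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="target",
    group_ids=["series"],
    max_encoder_length=400,     # history shown to the model
    max_prediction_length=200,  # decoder length, i.e. the 200 mentioned above
)
```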
@@ -61,7 +61,7 @@ Training models

 * **Why does the learning rate finder not finish?**

-First, ensure that the trainer does not have the keword ``fast_dev_run=True`` and
+First, ensure that the trainer does not have the keyword ``fast_dev_run=True`` and
 ``limit_train_batches=...`` set. Second, use a target normalizer in your training dataset.
 Third, increase the ``early_stop_threshold`` argument
 of the ``lr_find`` method to a large number.
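
A hedged sketch of the ``lr_find`` call this entry describes, assuming the Lightning ``Tuner`` API and reusing the model and dataloaders from the README sketch above:

```python
# Sketch of running the learning-rate finder with the settings the FAQ
# recommends: no fast_dev_run, no limit_train_batches, and a large
# early_stop_threshold so the search is not cut short.
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner

trainer = pl.Trainer()  # note: no fast_dev_run, no limit_train_batches
res = Tuner(trainer).lr_find(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
    early_stop_threshold=100.0,  # large value, per the third suggestion above
)
print(f"suggested learning rate: {res.suggestion()}")
```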

docs/source/index.rst

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ To use the MQF2 loss (multivariate quantile loss), also execute

 pip install pytorch-forecasting[mqf2]

-Vist :ref:`Getting started <getting-started>` to learn more about the package and detailled installation instruction.
+Visit :ref:`Getting started <getting-started>` to learn more about the package and detailed installation instruction.
 The :ref:`Tutorials <tutorials>` section provides guidance on how to use models and implement new ones.

 .. toctree::

docs/source/installation.rst

Lines changed: 4 additions & 4 deletions
@@ -81,7 +81,7 @@ Contributing to ``pytorch-forecasting``
 Contributions to PyTorch Forecasting are very welcome! You do not have to be an expert in deep learning
 to contribute. If you find a bug - fix it! If you miss a feature - propose it!

-To obtain an editible version ``pytorch-forecasting`` for development or contributions,
+To obtain an editable version ``pytorch-forecasting`` for development or contributions,
 you will need to set up:

 * a local clone of the ``pytorch-forecasting`` repository.
@@ -128,7 +128,7 @@ Creating a fork and cloning the repository
 > upstream https://github.com/sktime/pytorch-forecasting.git (fetch)
 > upstream https://github.com/sktime/pytorch-forecasting.git (push)

-Setting up an editible virtual environment
+Setting up an editable virtual environment
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 1. Set up a new virtual environment. Our instructions will go through the commands to set up a ``conda`` environment which is recommended for ``pytorch-forecasting`` development.
@@ -183,13 +183,13 @@ Technical Design Principles
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 When writing code for your new feature, it is recommended to follow these
-technical design principles to ensure compatability between the feature and the library.
+technical design principles to ensure compatibility between the feature and the library.

 * Backward compatible API if possible to prevent breaking code.
 * Powerful abstractions to enable quick experimentation. At the same time, the abstractions should
 allow the user to still take full control.
 * Intuitive default values that do not need changing in most cases.
-* Focus on forecasting time-related data - specificially timeseries regression and classificiation.
+* Focus on forecasting time-related data - specifically timeseries regression and classification.
 Contributions not directly related to this topic might not be merged. We want to keep the library as
 crisp as possible.
 * Install ``pre-commit`` and have it run on every commit that you make on your feature branches.

docs/source/metrics.rst

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ predictions add up. For example:

 Here we add to MAE an additional loss. This additional loss is the MAE calculated on the mean predictions
 and actuals. We can also use other metrics such as SMAPE to ensure aggregated results are unbiased in that metric.
-One important point to keep in mind is that this metric is calculated accross samples, i.e. it will vary depending
+One important point to keep in mind is that this metric is calculated across samples, i.e. it will vary depending
 on the batch size. In particular, errors tend to average out with increased batch sizes.

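A sketch of the metric composition this paragraph describes, based on the ``pytorch_forecasting.metrics`` API:

```python
# MAE on the individual predictions, plus MAE computed on predictions and
# actuals aggregated over the batch, hence the batch-size dependence
# noted above.
from pytorch_forecasting.metrics import MAE, AggregationMetric

composite_metric = MAE() + AggregationMetric(metric=MAE())
```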
docs/source/models.rst

Lines changed: 10 additions & 10 deletions
@@ -9,7 +9,7 @@ Model parameters very much depend on the dataset for which they are destined.

 PyTorch Forecasting provides a ``.from_dataset()`` method for each model that
 takes a :py:class:`~data.timeseries.TimeSeriesDataSet` and additional parameters
-that cannot directy derived from the dataset such as, e.g. ``learning_rate`` or ``hidden_size``.
+that cannot directly derived from the dataset such as, e.g. ``learning_rate`` or ``hidden_size``.

 To tune models, `optuna <https://optuna.readthedocs.io/>`_ can be used. For example, tuning of the
 :py:class:`~models.temporal_fusion_transformer.TemporalFusionTransformer`
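
The pattern described here, as a short sketch (the hyperparameter values are illustrative):

```python
# Dataset-dependent parameters (input sizes, encoders, ...) are inferred
# from the TimeSeriesDataSet; only values that cannot be derived from it,
# such as learning_rate or hidden_size, are passed explicitly.
from pytorch_forecasting import TemporalFusionTransformer

tft = TemporalFusionTransformer.from_dataset(
    training,            # a TimeSeriesDataSet
    learning_rate=0.03,
    hidden_size=16,
)
```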
@@ -47,7 +47,7 @@ Availability of covariates
 .. _model-covariates:

 If you have covariates, that is variables in addition to the target variable itself that hold information
-about the target, then your case will benefit from a model that can accomodate covariates. A model that
+about the target, then your case will benefit from a model that can accommodate covariates. A model that
 cannot use covariates is :py:class:`~pytorch_forecasting.models.nbeats.NBeats`.

 Length of timeseries
@@ -58,7 +58,7 @@ most models are created and tested on very long timeseries while in practice sho
 timeseries are often encountered. A model that can leverage covariates well such as the
 :py:class:`~pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer`
 will typically perform better than other models on short timeseries. It is a significant step
-from short timeseries to making cold-start predictions soley based on static covariates, i.e.
+from short timeseries to making cold-start predictions solely based on static covariates, i.e.
 making predictions without observed history. For example,
 this is only supported by the
 :py:class:`~pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer`
@@ -72,7 +72,7 @@ If your time series are related to each other (e.g. all sales of products of the
 a model that can learn relations between the timeseries can improve accuracy.
 Not that only :ref:`models that can process covariates <model-covariates>` can
 learn relationships between different timeseries.
-If the timeseries denote different entities or exhibit very similar patterns accross the board,
+If the timeseries denote different entities or exhibit very similar patterns across the board,
 a model such as :py:class:`~pytorch_forecasting.models.nbeats.NBeats` will not work as well.

 If you have only one or very few timeseries,
@@ -86,7 +86,7 @@ Not every can do regression, classification or handle multiple targets. Some are
 geared towards a single task. For example, :py:class:`~pytorch_forecasting.models.nbeats.NBeats`
 can only be used for regression on a single target without covariates while the
 :py:class:`~pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer` supports
-multiple targets and even hetrogeneous targets where some are continuous variables and others categorical,
+multiple targets and even heterogeneous targets where some are continuous variables and others categorical,
 i.e. regression and classification at the same time. :py:class:`~pytorch_forecasting.models.deepar.DeepAR`
 can handle multiple targets but only works for regression tasks.

@@ -97,18 +97,18 @@ Supporting uncertainty
 ~~~~~~~~~~~~~~~~~~~~~~~

 Not all models support uncertainty estimation. Those that do, might do so in different fashions.
-Non-parameteric models provide forecasts that are not bound to a given distribution
+Non-parametric models provide forecasts that are not bound to a given distribution
 while parametric models assume that the data follows a specific distribution.

 The parametric models will be a better choice if you
 know how your data (and potentially error) is distributed. However, if you are missing this information or
 cannot make an educated guess that matches reality rather well, the model's uncertainty estimates will
-be adversely impacted. In this case, a non-parameteric model will do much better.
+be adversely impacted. In this case, a non-parametric model will do much better.

-:py:class:`~pytorch_forecasting.models.deepar.DeepAR` is an example for a parameteric model while
+:py:class:`~pytorch_forecasting.models.deepar.DeepAR` is an example for a parametric model while
 the :py:class:`~pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer`
 can output quantile forecasts that can fit any distribution.
-Models based on normalizing flows marry the two worlds by providing a non-parameteric estimate
+Models based on normalizing flows marry the two worlds by providing a non-parametric estimate
 of a full probability distribution. PyTorch Forecasting currently does not provide
 support for these but
 `Pyro, a package for probabilistic programming <https://pyro.ai/examples/normalizing_flows_i.html>`_ does
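
The contrast drawn here, as a brief sketch (the choice of losses and the ``training`` dataset are illustrative):

```python
# Parametric: DeepAR with a distribution loss that assumes normally
# distributed data. Non-parametric: TFT with a quantile loss that makes
# no distributional assumption.
from pytorch_forecasting import DeepAR, TemporalFusionTransformer
from pytorch_forecasting.metrics import NormalDistributionLoss, QuantileLoss

deepar = DeepAR.from_dataset(training, loss=NormalDistributionLoss())
tft = TemporalFusionTransformer.from_dataset(training, loss=QuantileLoss())
```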
@@ -120,7 +120,7 @@ Computational requirements
 Some models have simpler architectures and less parameters than others which can
 lead to significantly different training times. However, this not a general rule as demonstrated
 by Zhuohan et al. in `Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
-<https://arxiv.org/abs/2002.11794>`_. Because the data for a sample for timeseries models is often far samller than it
+<https://arxiv.org/abs/2002.11794>`_. Because the data for a sample for timeseries models is often far smaller than it
 is for computer vision or language tasks, GPUs are often underused and increasing the width of models can be an effective way
 to fully use a GPU. This can increase the speed of training while also improving accuracy.
 The other path to pushing utilization of a GPU up is increasing the batch size.
