
Commit 4dc2ba8

tonybove-apple and above3 authored
Docs - Format Docstrings for and Other 7.1 Changes (#2044)
Co-authored-by: above3 <anthony_bove@apple.com>
1 parent 6172f0f commit 4dc2ba8


3 files changed: +30, -28 lines changed


coremltools/converters/_converters_entry.py

Lines changed: 19 additions & 18 deletions
@@ -112,7 +112,8 @@ def convert(
             - Path to a ``.pt`` file

         - Torch Exported Models:
-            - A `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram> ` object with `EDGE` dialect
+            - An `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_
+              object with ``EDGE`` dialect.

     source : str (optional)

@@ -181,12 +182,13 @@ def convert(
         ``ImageType``, the converted Core ML model will have inputs with
         the same name.
         - If ``dtype`` is missing:
-            * For ``minimum_deployment_target <= ct.target.macOS12``, it defaults to float 32.
-            * For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
-              It defaults to float 16.
+          * For ``minimum_deployment_target <= ct.target.macOS12``, it defaults to float 32.
+          * For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
+            It defaults to float 16.

     - Torch Exported Models:
-        - The ``inputs`` parameter is not supported. ``inputs`` parameter is inferred from Torch ExportedProgram.
+        - The ``inputs`` parameter is not supported. The ``inputs`` parameter is
+          inferred from the Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.

 outputs : list of ``TensorType`` or ``ImageType`` (optional)

@@ -230,20 +232,19 @@ def convert(
       If ``dtype`` not specified, the outputs inferred of type float 32
       defaults to float 16.

-    * PyTorch:
-
-        - TorchScript Models:
-            - If specified, the length of the list must match the number of
-              outputs returned by the PyTorch model.
-            - If ``name`` is specified, it is applied to the output names of the
-              converted Core ML model.
-            - For ``minimum_deployment_target >= ct.target.macOS13``, and with ``compute_precision`` in float 16 precision.
-              If ``dtype`` not specified, the outputs inferred of type float 32
-              defaults to float 16.
-
-        - Torch Exported Models:
-            - The ``outputs`` parameter is not supported. ``outputs`` parameter is inferred from Torch ExportedProgram.
+    * PyTorch: TorchScript Models
+        - If specified, the length of the list must match the number of
+          outputs returned by the PyTorch model.
+        - If ``name`` is specified, it is applied to the output names of the
+          converted Core ML model.
+        - For ``minimum_deployment_target >= ct.target.macOS13``,
+          and with ``compute_precision`` in float 16 precision.
+        - If ``dtype`` not specified, the outputs inferred of type float 32
+          defaults to float 16.

+    * PyTorch: Torch Exported Models:
+        - The ``outputs`` parameter is not supported.
+          The ``outputs`` parameter is inferred from Torch `ExportedProgram <https://pytorch.org/docs/stable/export.html#torch.export.ExportedProgram>`_.

 classifier_config : ClassifierConfig class (optional)
     The configuration if the MLModel is intended to be a classifier.
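
For context, a minimal sketch (not part of the commit) of the exported-program path this docstring describes, assuming torch and coremltools are installed. The toy module, input shape, and deployment target are illustrative, and the extra lowering step needed to put the program into the ``EDGE`` dialect mentioned above is not shown.

# Hedged sketch: converting a torch.export ExportedProgram with ct.convert.
# Per the docstring, `inputs` and `outputs` are not passed; they are
# inferred from the ExportedProgram itself.
import torch
import coremltools as ct

class SmallNet(torch.nn.Module):  # hypothetical toy model
    def forward(self, x):
        return torch.relu(x) + 1.0

example_args = (torch.rand(1, 3, 8, 8),)  # illustrative shape
exported = torch.export.export(SmallNet().eval(), example_args)
# NOTE: the docstring expects the program in the EDGE dialect; any
# additional lowering required for that is omitted here.

mlmodel = ct.convert(
    exported,
    minimum_deployment_target=ct.target.macOS13,  # float-32 I/O defaults to float 16 here
)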

coremltools/converters/mil/mil/passes/defs/optimize_tensor_operation.py

Lines changed: 2 additions & 2 deletions
@@ -100,10 +100,10 @@ def _try_to_transform(op, block):
 class expand_high_rank_reshape_and_transpose(AbstractGraphPass):
     """
     Detect the pattern ``reshape_1-->transpose-->reshape_2``, where ``reshape_1`` has
-    a output tensor with rank >= 6, and the reshape_2 produces a tensor with rank <= 5.
+    an output tensor with ``rank >= 6``, and ``reshape_2`` produces a tensor with ``rank <= 5``.

     In general, we can expand this pattern into a sequence of rank 4 ``reshape`` and ``transpose`` ops,
-    which is supported by Core ML runtime.
+    which is supported by the Core ML runtime.

     .. code-block::

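
To make the detected pattern concrete, a small NumPy illustration (not part of the commit) of the shape sequence follows; the shapes are arbitrary, chosen only so that ``reshape_1`` yields rank 6 while ``reshape_2`` returns to rank 5. The pass rewrites such a chain into rank-4 ``reshape``/``transpose`` ops that the Core ML runtime supports.

# Hypothetical shapes illustrating reshape_1 --> transpose --> reshape_2,
# where the intermediate tensor exceeds the runtime's rank-5 limit.
import numpy as np

x  = np.random.rand(2, 3, 4, 5, 6)    # rank-5 input, 720 elements
r1 = x.reshape(2, 3, 2, 2, 5, 6)      # reshape_1: rank 6 (>= 6)
t  = r1.transpose(0, 2, 1, 3, 4, 5)   # transpose on the rank-6 tensor
r2 = t.reshape(2, 2, 3, 2, 30)        # reshape_2: rank 5 (<= 5)
assert r2.size == x.size              # same element count throughout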

coremltools/models/utils.py

Lines changed: 9 additions & 8 deletions
@@ -1065,23 +1065,24 @@ def make_pipeline(

     Parameters
     ----------
-    *models
+    *models :
         Two or more instances of ``ct.models.MLModel``.

-    compute_units: ``None`` or ``coremltools.ComputeUnit``
+    compute_units :
         The set of processing units that all models in the pipeline can use to make predictions.
+        Can be ``None`` or ``coremltools.ComputeUnit``.

-        If None, the ``compute_unit`` will be infered from the ``compute_units`` values of the models.
-        If all models do not have the same ``compute_units`` values, this parameter must be specified.
+        * If ``None``, the ``compute_unit`` will be infered from the ``compute_unit`` values of the models.
+          If all models do not have the same ``compute_unit`` values, this parameter must be specified.

-        ``coremltools.ComputeUnit`` is an enum with four possible values:
+        * ``coremltools.ComputeUnit`` is an enum with four possible values:
             - ``coremltools.ComputeUnit.ALL``: Use all compute units available, including the
-              neural engine.
+               neural engine.
             - ``coremltools.ComputeUnit.CPU_ONLY``: Limit the model to only use the CPU.
             - ``coremltools.ComputeUnit.CPU_AND_GPU``: Use both the CPU and GPU,
-              but not the neural engine.
+               but not the neural engine.
             - ``coremltools.ComputeUnit.CPU_AND_NE``: Use both the CPU and neural engine, but
-              not the GPU. Available only for macOS >= 13.0.
+               not the GPU. Available only for macOS >= 13.0.

     Returns
     -------
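
As a usage sketch for the parameters reworded here (not part of the commit), the following assumes two already-converted models; the file names are placeholders.

# Hedged sketch: chaining two converted models; file names are hypothetical.
import coremltools as ct

model_1 = ct.models.MLModel("first_model.mlpackage")
model_2 = ct.models.MLModel("second_model.mlpackage")

# compute_units may be omitted when both models already agree on it; it must
# be passed explicitly (as here) when their values differ.
pipeline_model = ct.utils.make_pipeline(
    model_1,
    model_2,
    compute_units=ct.ComputeUnit.CPU_ONLY,
)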
