Merged
31 commits
0963309  chore: update docs/dyn/index.md  (yoshi-automation, Mar 17, 2026)
5577adf  feat(admin): update the api  (yoshi-automation, Mar 17, 2026)
f4eee62  feat(aiplatform): update the api  (yoshi-automation, Mar 17, 2026)
0ae6b5e  feat(analyticshub): update the api  (yoshi-automation, Mar 17, 2026)
3bcae30  feat(androidpublisher): update the api  (yoshi-automation, Mar 17, 2026)
d585383  feat(apihub): update the api  (yoshi-automation, Mar 17, 2026)
df7e53e  feat(backupdr): update the api  (yoshi-automation, Mar 17, 2026)
a2ecc21  feat(bigquery): update the api  (yoshi-automation, Mar 17, 2026)
3132d92  feat(bigqueryreservation): update the api  (yoshi-automation, Mar 17, 2026)
2dc2ec0  feat(ces): update the api  (yoshi-automation, Mar 17, 2026)
cffc11a  feat(classroom): update the api  (yoshi-automation, Mar 17, 2026)
a4d5055  feat(cloudbuild): update the api  (yoshi-automation, Mar 17, 2026)
5eefc0e  feat(cloudidentity): update the api  (yoshi-automation, Mar 17, 2026)
166ea81  feat(compute): update the api  (yoshi-automation, Mar 17, 2026)
ee644fc  feat(contactcenterinsights): update the api  (yoshi-automation, Mar 17, 2026)
e7ad741  feat(css): update the api  (yoshi-automation, Mar 17, 2026)
8abec73  feat(datalineage): update the api  (yoshi-automation, Mar 17, 2026)
3834f26  feat(dataplex): update the api  (yoshi-automation, Mar 17, 2026)
e51e83a  feat(discoveryengine): update the api  (yoshi-automation, Mar 17, 2026)
6346814  feat(displayvideo): update the api  (yoshi-automation, Mar 17, 2026)
b538e2e  feat(drive): update the api  (yoshi-automation, Mar 17, 2026)
242ee76  feat(fcm): update the api  (yoshi-automation, Mar 17, 2026)
44b794c  feat(iam): update the api  (yoshi-automation, Mar 17, 2026)
54b06e5  feat(migrationcenter): update the api  (yoshi-automation, Mar 17, 2026)
8371670  feat(networkconnectivity): update the api  (yoshi-automation, Mar 17, 2026)
3317e69  feat(redis): update the api  (yoshi-automation, Mar 17, 2026)
afe9f8a  feat(run): update the api  (yoshi-automation, Mar 17, 2026)
0fb2f3b  feat(searchads360): update the api  (yoshi-automation, Mar 17, 2026)
fb1d3d2  fix(sts): update the api  (yoshi-automation, Mar 17, 2026)
565e85e  feat(workloadmanager): update the api  (yoshi-automation, Mar 17, 2026)
3b6928a  chore(docs): Add new discovery artifacts and artifacts with minor upd…  (yoshi-automation, Mar 17, 2026)
20 changes: 19 additions & 1 deletion docs/dyn/admin_reports_v1.activities.html
@@ -320,7 +320,25 @@ <h3>Method Details</h3>
 &quot;title&quot;: &quot;A String&quot;, # Title of the label
 },
 ],
-&quot;id&quot;: &quot;A String&quot;, # Identifier of the resource.
+&quot;id&quot;: &quot;A String&quot;, # Identifier of the resource, such as a doc_id for a Drive document, a conference_id for a Meet conference, or a &quot;gaia_id/rfc2822_message_id&quot; for an email.
+&quot;ownerDetails&quot;: { # Details of the owner of the resource. # Owner details of the resource.
+&quot;ownerIdentity&quot;: [ # Identity details of the owner(s) of the resource.
+{ # Identity details of the owner of the resource.
+&quot;customerIdentity&quot;: { # Identity of the Google Workspace customer who owns the resource. # Identity of the Google Workspace customer who owns the resource.
+&quot;id&quot;: &quot;A String&quot;, # Customer id.
+},
+&quot;groupIdentity&quot;: { # Identity of the group who owns the resource. # Identity of the group who owns the resource.
+&quot;groupEmail&quot;: &quot;A String&quot;, # Group email.
+&quot;id&quot;: &quot;A String&quot;, # Group gaia id.
+},
+&quot;userIdentity&quot;: { # Identity of the user who owns the resource. # Identity of the user who owns the resource.
+&quot;id&quot;: &quot;A String&quot;, # User gaia id.
+&quot;userEmail&quot;: &quot;A String&quot;, # User email.
+},
+},
+],
+&quot;ownerType&quot;: &quot;A String&quot;, # Type of the owner of the resource.
+},
 &quot;relation&quot;: &quot;A String&quot;, # Defines relationship of the resource to the events
 &quot;title&quot;: &quot;A String&quot;, # Title of the resource. For instance, in case of a drive document, this would be the title of the document. In case of an email, this would be the subject.
 &quot;type&quot;: &quot;A String&quot;, # Type of the resource - document, email, chat message
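The new `ownerDetails` block in the hunk above nests owner identities one level down, with exactly one of `userIdentity`, `groupIdentity`, or `customerIdentity` populated per entry. As a hedged sketch (the response shape is taken from the diff; the helper name and the sample payload are hypothetical, not part of the API), a small Python helper can flatten it:

```python
def summarize_owners(resource):
    """Flatten ownerDetails.ownerIdentity entries of an activity resource
    into (kind, id, email) tuples, following the schema in the diff above."""
    out = []
    for ident in resource.get("ownerDetails", {}).get("ownerIdentity", []):
        if "userIdentity" in ident:
            u = ident["userIdentity"]
            out.append(("user", u.get("id"), u.get("userEmail")))
        elif "groupIdentity" in ident:
            g = ident["groupIdentity"]
            out.append(("group", g.get("id"), g.get("groupEmail")))
        elif "customerIdentity" in ident:
            c = ident["customerIdentity"]
            out.append(("customer", c.get("id"), None))
    return out

# Hypothetical payload shaped like the documented schema.
sample = {
    "id": "doc_123",
    "ownerDetails": {
        "ownerType": "USER",
        "ownerIdentity": [
            {"userIdentity": {"id": "1001", "userEmail": "a@example.com"}},
            {"groupIdentity": {"id": "2002", "groupEmail": "g@example.com"}},
        ],
    },
}
print(summarize_owners(sample))
```

In a real call this payload would come from `service.activities().list(...)` on the `admin reports_v1` service; the helper itself only touches plain dicts.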
140 changes: 70 additions & 70 deletions docs/dyn/aiplatform_v1.endpoints.html

Large diffs are not rendered by default.

204 changes: 102 additions & 102 deletions docs/dyn/aiplatform_v1.projects.locations.cachedContents.html

Large diffs are not rendered by default.

140 changes: 70 additions & 70 deletions docs/dyn/aiplatform_v1.projects.locations.endpoints.html

Large diffs are not rendered by default.

112 changes: 56 additions & 56 deletions docs/dyn/aiplatform_v1.projects.locations.evaluationItems.html

Large diffs are not rendered by default.

28 changes: 24 additions & 4 deletions docs/dyn/aiplatform_v1.projects.locations.evaluationRuns.html
@@ -281,6 +281,11 @@ <h3>Method Details</h3>
 },
 &quot;sampleCount&quot;: 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
 },
+&quot;datasetCustomMetrics&quot;: [ # Optional. Specifications for custom dataset-level aggregations.
+{ # Defines a custom dataset-level aggregation.
+&quot;displayName&quot;: &quot;A String&quot;, # Optional. A display name for this custom summary metric. Used to prefix keys in the output summaryMetrics map. If not provided, a default name like &quot;dataset_custom_metric_1&quot;, &quot;dataset_custom_metric_2&quot;, etc., will be generated based on the order in the repeated field.
+},
+],
 &quot;metrics&quot;: [ # Required. The metrics to be calculated in the evaluation run.
 { # The metric used for evaluation runs.
 &quot;computationBasedMetricSpec&quot;: { # Specification for a computation based metric. # Spec for a computation based metric.
@@ -1389,7 +1394,7 @@ <h3>Method Details</h3>
 &quot;topK&quot;: 3.14, # Optional. Specifies the top-k sampling threshold. The model considers only the top k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a `top_k` of 40 means the model will choose the next word from the 40 most likely words.
 &quot;topP&quot;: 3.14, # Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least `top_p`. This helps generate more diverse and less repetitive responses. For example, a `top_p` of 0.9 means the model considers tokens until the cumulative probability of the tokens to select from reaches 0.9. It&#x27;s recommended to adjust either temperature or `top_p`, but not both.
 },
-&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
+&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Anthropic and Llama third-party models are also supported through Model Garden. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Third-party model format: `projects/{project}/locations/{location}/publishers/anthropic/models/{model}` `projects/{project}/locations/{location}/publishers/llama/models/{model}` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
 },
 },
 &quot;labels&quot;: { # Optional. Labels for the evaluation run.
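The widened `model` field now accepts three path shapes: a first-party publisher model, a third-party (Anthropic or Llama) Model Garden model, and a deployed endpoint. A quick way to see the distinction (the regexes and function name are illustrative, not part of the client library):

```python
import re

# Resource-name patterns taken from the field documentation above.
_PUBLISHER = re.compile(
    r"^projects/[^/]+/locations/[^/]+/publishers/([^/]+)/models/[^/]+$"
)
_ENDPOINT = re.compile(r"^projects/[^/]+/locations/[^/]+/endpoints/[^/]+$")

def classify_model(name):
    """Classify a model resource name per the formats in the field docs."""
    m = _PUBLISHER.match(name)
    if m:
        # Anthropic and Llama publishers are the third-party Model Garden case.
        return "third_party" if m.group(1) in ("anthropic", "llama") else "publisher"
    if _ENDPOINT.match(name):
        return "endpoint"
    return "unknown"

print(classify_model("projects/p/locations/us-central1/publishers/google/models/gemini-2.0-flash"))
print(classify_model("projects/p/locations/us-central1/publishers/anthropic/models/claude-x"))
print(classify_model("projects/p/locations/us-central1/endpoints/123"))
```

The model names shown are placeholders; only the path structure matters.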
@@ -1555,6 +1560,11 @@ <h3>Method Details</h3>
 },
 &quot;sampleCount&quot;: 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
 },
+&quot;datasetCustomMetrics&quot;: [ # Optional. Specifications for custom dataset-level aggregations.
+{ # Defines a custom dataset-level aggregation.
+&quot;displayName&quot;: &quot;A String&quot;, # Optional. A display name for this custom summary metric. Used to prefix keys in the output summaryMetrics map. If not provided, a default name like &quot;dataset_custom_metric_1&quot;, &quot;dataset_custom_metric_2&quot;, etc., will be generated based on the order in the repeated field.
+},
+],
 &quot;metrics&quot;: [ # Required. The metrics to be calculated in the evaluation run.
 { # The metric used for evaluation runs.
 &quot;computationBasedMetricSpec&quot;: { # Specification for a computation based metric. # Spec for a computation based metric.
@@ -2663,7 +2673,7 @@ <h3>Method Details</h3>
 &quot;topK&quot;: 3.14, # Optional. Specifies the top-k sampling threshold. The model considers only the top k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a `top_k` of 40 means the model will choose the next word from the 40 most likely words.
 &quot;topP&quot;: 3.14, # Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least `top_p`. This helps generate more diverse and less repetitive responses. For example, a `top_p` of 0.9 means the model considers tokens until the cumulative probability of the tokens to select from reaches 0.9. It&#x27;s recommended to adjust either temperature or `top_p`, but not both.
 },
-&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
+&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Anthropic and Llama third-party models are also supported through Model Garden. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Third-party model format: `projects/{project}/locations/{location}/publishers/anthropic/models/{model}` `projects/{project}/locations/{location}/publishers/llama/models/{model}` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
 },
 },
 &quot;labels&quot;: { # Optional. Labels for the evaluation run.
@@ -2871,6 +2881,11 @@ <h3>Method Details</h3>
 },
 &quot;sampleCount&quot;: 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
 },
+&quot;datasetCustomMetrics&quot;: [ # Optional. Specifications for custom dataset-level aggregations.
+{ # Defines a custom dataset-level aggregation.
+&quot;displayName&quot;: &quot;A String&quot;, # Optional. A display name for this custom summary metric. Used to prefix keys in the output summaryMetrics map. If not provided, a default name like &quot;dataset_custom_metric_1&quot;, &quot;dataset_custom_metric_2&quot;, etc., will be generated based on the order in the repeated field.
+},
+],
 &quot;metrics&quot;: [ # Required. The metrics to be calculated in the evaluation run.
 { # The metric used for evaluation runs.
 &quot;computationBasedMetricSpec&quot;: { # Specification for a computation based metric. # Spec for a computation based metric.
@@ -3979,7 +3994,7 @@ <h3>Method Details</h3>
 &quot;topK&quot;: 3.14, # Optional. Specifies the top-k sampling threshold. The model considers only the top k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a `top_k` of 40 means the model will choose the next word from the 40 most likely words.
 &quot;topP&quot;: 3.14, # Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least `top_p`. This helps generate more diverse and less repetitive responses. For example, a `top_p` of 0.9 means the model considers tokens until the cumulative probability of the tokens to select from reaches 0.9. It&#x27;s recommended to adjust either temperature or `top_p`, but not both.
 },
-&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
+&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Anthropic and Llama third-party models are also supported through Model Garden. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Third-party model format: `projects/{project}/locations/{location}/publishers/anthropic/models/{model}` `projects/{project}/locations/{location}/publishers/llama/models/{model}` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
 },
 },
 &quot;labels&quot;: { # Optional. Labels for the evaluation run.
@@ -4158,6 +4173,11 @@ <h3>Method Details</h3>
 },
 &quot;sampleCount&quot;: 42, # Optional. Number of samples for each instance in the dataset. If not specified, the default is 4. Minimum value is 1, maximum value is 32.
 },
+&quot;datasetCustomMetrics&quot;: [ # Optional. Specifications for custom dataset-level aggregations.
+{ # Defines a custom dataset-level aggregation.
+&quot;displayName&quot;: &quot;A String&quot;, # Optional. A display name for this custom summary metric. Used to prefix keys in the output summaryMetrics map. If not provided, a default name like &quot;dataset_custom_metric_1&quot;, &quot;dataset_custom_metric_2&quot;, etc., will be generated based on the order in the repeated field.
+},
+],
 &quot;metrics&quot;: [ # Required. The metrics to be calculated in the evaluation run.
 { # The metric used for evaluation runs.
 &quot;computationBasedMetricSpec&quot;: { # Specification for a computation based metric. # Spec for a computation based metric.
@@ -5266,7 +5286,7 @@ <h3>Method Details</h3>
 &quot;topK&quot;: 3.14, # Optional. Specifies the top-k sampling threshold. The model considers only the top k most probable tokens for the next token. This can be useful for generating more coherent and less random text. For example, a `top_k` of 40 means the model will choose the next word from the 40 most likely words.
 &quot;topP&quot;: 3.14, # Optional. Specifies the nucleus sampling threshold. The model considers only the smallest set of tokens whose cumulative probability is at least `top_p`. This helps generate more diverse and less repetitive responses. For example, a `top_p` of 0.9 means the model considers tokens until the cumulative probability of the tokens to select from reaches 0.9. It&#x27;s recommended to adjust either temperature or `top_p`, but not both.
 },
-&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
+&quot;model&quot;: &quot;A String&quot;, # Optional. The fully qualified name of the publisher model or endpoint to use. Anthropic and Llama third-party models are also supported through Model Garden. Publisher model format: `projects/{project}/locations/{location}/publishers/*/models/*` Third-party model format: `projects/{project}/locations/{location}/publishers/anthropic/models/{model}` `projects/{project}/locations/{location}/publishers/llama/models/{model}` Endpoint format: `projects/{project}/locations/{location}/endpoints/{endpoint}`
 },
 },
 &quot;labels&quot;: { # Optional. Labels for the evaluation run.
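The `displayName` comment in the `datasetCustomMetrics` hunks specifies positional defaults for unnamed entries. A sketch of that defaulting rule (the helper name is hypothetical; that named entries still occupy a position in the numbering is an assumption read from "based on the order in the repeated field"):

```python
def resolve_summary_metric_names(specs):
    """Apply the documented default: an entry without a displayName gets
    "dataset_custom_metric_<n>", where <n> is its 1-based position in the
    repeated field. Assumption: named entries still consume a position."""
    return [
        spec.get("displayName") or f"dataset_custom_metric_{i}"
        for i, spec in enumerate(specs, start=1)
    ]

print(resolve_summary_metric_names([{}, {"displayName": "pass_rate"}, {}]))
```

These resolved names would then prefix keys in the output `summaryMetrics` map, per the field docs.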