Commit fdb796c

added logit page (#7973)
* added logit page
* minor fixes
1 parent 7c43a30 commit fdb796c

File tree

1 file changed: +85 -0 lines changed
  • content/pytorch/concepts/tensor-operations/terms/logit

---
Title: '.logit()'
Description: 'Returns the logit of each element in the input tensor.'
Subjects:
  - 'Code Foundations'
  - 'Computer Science'
  - 'Data Science'
Tags:
  - 'Elements'
  - 'Methods'
  - 'PyTorch'
  - 'Tensor'
CatalogContent:
  - 'learn-python-3'
  - 'paths/data-science'
---

The **`torch.logit()`** function computes the logit (log-odds) of each element in the input [tensor](https://www.codecademy.com/resources/docs/pytorch/tensors). The logit function is the inverse of the logistic sigmoid function, defined as:

$$\text{logit}(x) = \log\left(\frac{x}{1 - x}\right)$$
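
For instance, an input of `0.8` has odds of `0.8 / 0.2 = 4`, so:

$$\text{logit}(0.8) = \log(4) \approx 1.3863$$

An input of `0.5` corresponds to even odds, so its logit is `0`.
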
This operation is widely used in statistics and machine learning, particularly in logistic regression and neural network transformations. This function is an alias for `torch.special.logit()`.

## Syntax

```pseudo
torch.logit(input, eps=None, *, out=None)
```

**Parameters:**

- `input` (Tensor): The input tensor, where each element should be in the range `(0, 1)` when `eps` is not provided.
- `eps` (float, optional): A small value used for numerical stability. Values less than `eps` are clamped to `eps`, and values greater than `1 - eps` are clamped to `1 - eps`.
- `out` (Tensor, optional): The output tensor to store the result.

**Return value:**

Returns a tensor containing the logit transformation of the input values.
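
As a quick sketch of the `out` parameter (the tensor values and names here are illustrative), the result can be written into a preallocated tensor instead of allocating a new one:

```py
import torch

probs = torch.tensor([0.25, 0.75])
result = torch.empty_like(probs)

# Store the logits in the preallocated `result` tensor
torch.logit(probs, out=result)

print(result)
```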

## Example 1

In this example, probabilities are converted into logits and then passed through a sigmoid function to verify the inverse relationship:

```py
import torch

probs = torch.tensor([0.2, 0.5, 0.8])
logits = torch.logit(probs)
recovered = torch.sigmoid(logits)

print("probs:", probs)
print("logits:", logits)
print("sigmoid(logits):", recovered)
```

Expected output (values may vary slightly due to precision):

```shell
probs: tensor([0.2000, 0.5000, 0.8000])
logits: tensor([-1.3863, 0.0000, 1.3863])
sigmoid(logits): tensor([0.2000, 0.5000, 0.8000])
```
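
Note that the logits for `0.2` and `0.8` differ only in sign, reflecting the identity `logit(1 - p) = -logit(p)`.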

## Example 2

In this example, the `eps` parameter is used to prevent infinities when the input contains 0 or 1:

```py
import torch

x = torch.tensor([0.0, 1.0])

# Without eps: produces -inf and +inf
print(torch.logit(x, eps=None))

# With eps: clamps input to [eps, 1 - eps] before applying logit
print(torch.logit(x, eps=1e-6))
```

The output of this code is:

```shell
tensor([-inf, inf])
tensor([-13.8155, 13.8023])
```
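
Based on the clamping behavior described above, a rough sanity check (variable names here are illustrative) is that passing `eps` should match clamping the input to `[eps, 1 - eps]` before calling `torch.logit()`:

```py
import torch

x = torch.tensor([0.0, 0.3, 1.0])
eps = 1e-6

# eps-based clamping inside logit vs. clamping the input manually
a = torch.logit(x, eps=eps)
b = torch.logit(x.clamp(eps, 1 - eps))

print(torch.allclose(a, b))  # Expected: True
```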
