nn.Linear(), the fully connected layer: applies a linear transformation to its input

Official documentation (docstring of nn.Linear):
Init signature:
nn.Linear(
in_features: int,
out_features: int,
bias: bool = True,
device=None,
dtype=None,
) -> None
Docstring:
Applies a linear transformation to the incoming data: y = xA^T + b

This module supports TensorFloat32.

Args:
in_features: size of each input sample
out_features: size of each output sample
bias: If set to False, the layer will not learn an additive bias.
Default: True

Shape:
- Input: (*, H_in), where * means any number of
dimensions including none and H_in = in_features.
- Output: (*, H_out), where all but the last dimension
are the same shape as the input and H_out = out_features.

Attributes:
weight: the learnable weights of the module of shape
(out_features, in_features). The values are
initialized from U(-sqrt(k), sqrt(k)), where
k = 1 / in_features
bias: the learnable bias of the module of shape (out_features).
If bias is True, the values are initialized from
U(-sqrt(k), sqrt(k)), where
k = 1 / in_features

Examples::

>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])

Init docstring: Initializes internal Module state, shared by both nn.Module and ScriptModule.
File: c:\users\administrator\appdata\roaming\python\python37\site-packages\torch\nn\modules\linear.py
Type: type
Subclasses: NonDynamicallyQuantizableLinear, LazyLinear, Linear, Linear
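
To tie the docstring back to the formula y = xA^T + b, here is a minimal sketch (the sizes 3 and 5 and the batch size 10 are arbitrary choices for illustration, not from the original) that checks the module's output against a manual computation with its weight and bias:

import torch
import torch.nn as nn

fc = nn.Linear(in_features=3, out_features=5)  # weight shape: (5, 3), bias shape: (5,)
x = torch.randn(10, 3)                         # batch of 10 samples, 3 features each

out = fc(x)                                    # shape: (10, 5)

# reproduce y = x A^T + b by hand with the layer's parameters
manual = x @ fc.weight.t() + fc.bias

print(out.shape)                    # torch.Size([10, 5])
print(torch.allclose(out, manual))  # True

Note that the weight is stored as (out_features, in_features), which is why the formula transposes it before the multiplication.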

Example:

x = torch.randn([10, 3])

Output:
tensor([[-0.2022, -1.0258, -0.0116],
        [ 0.4581, -1.4392,  0.7463],
        [ 0.4723,  0.7842,  2.1767],
        [-1.6525, -0.1205, -1.7498],
        [-0.9119, -0.1080,  0.4499],
        [-0.2130,  0.5349, -0.5764],
        [ 0.8852, -0.2906,  0.4138],
        [ 0.4349,  0.1988,  0.5386],
        [ 1.2275,  0.3119, -0.7539],
        [-0.3409,  0.3802, -0.6528]])
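
Continuing the example, a minimal sketch (the choice of out_features=4 is an assumption, purely for illustration) feeds x through a Linear layer; only the last dimension changes, from 3 to 4:

import torch
import torch.nn as nn

x = torch.randn([10, 3])   # same shape as above; the random values will differ
layer = nn.Linear(3, 4)    # assumed out_features=4 for illustration
y = layer(x)
print(y.shape)             # torch.Size([10, 4])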
