nntoolbox.vision.components.layers module
class nntoolbox.vision.components.layers.BiasLayer2D(out_channels: int, init: float = 0.0)
    Bases: torch.nn.modules.module.Module

    Add a trainable bias vector to the input:

        y = x + bias

    forward(input: torch.Tensor) → torch.Tensor
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
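    A minimal usage sketch for BiasLayer2D, assuming the bias is a per-channel vector that broadcasts over the spatial dimensions (as the out_channels argument suggests):

        import torch
        from nntoolbox.vision.components.layers import BiasLayer2D

        x = torch.randn(8, 16, 32, 32)             # (N, C, H, W)
        bias_layer = BiasLayer2D(out_channels=16)  # init=0.0 by default
        y = bias_layer(x)                          # y = x + bias, broadcast over H and W
        assert y.shape == x.shape
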
class nntoolbox.vision.components.layers.ConvolutionalLayer(in_channels, out_channels, kernel_size=3, stride=1, padding=0, bias=False, activation=<class 'torch.nn.modules.activation.ReLU'>, normalization=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>)
    Bases: torch.nn.modules.container.Sequential

    Simple convolutional layer: input -> conv2d -> activation -> norm 2d

    training: bool
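    A usage sketch for ConvolutionalLayer; it is assumed, as the defaults suggest, that activation and normalization take layer classes rather than instances:

        import torch
        from nntoolbox.vision.components.layers import ConvolutionalLayer

        # 3 -> 32 channels: conv2d -> ReLU -> BatchNorm2d (the defaults), kernel_size=3
        layer = ConvolutionalLayer(in_channels=3, out_channels=32, padding=1)
        x = torch.randn(4, 3, 64, 64)
        y = layer(x)                               # (4, 32, 64, 64) with stride=1, padding=1
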
class nntoolbox.vision.components.layers.CoordConv2D(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
    Bases: torch.nn.modules.conv.Conv2d

    Implement CoordConv: https://arxiv.org/pdf/1807.03247.pdf

    static augment_input(input)
        Add two coordinate channels to the input.

        Parameters: input – (N, C, H, W)
        Returns: (N, C + 2, H, W)

    forward(input)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    bias: Optional[torch.Tensor]
    dilation: Tuple[int, …]
    groups: int
    kernel_size: Tuple[int, …]
    out_channels: int
    output_padding: Tuple[int, …]
    padding: Tuple[int, …]
    padding_mode: str
    stride: Tuple[int, …]
    transposed: bool
    weight: torch.Tensor
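    The coordinate augmentation behind CoordConv can be sketched as below. This is an illustrative reimplementation of the idea from the paper, not necessarily identical to augment_input (in particular, normalizing coordinates to [-1, 1] is an assumption); only the documented (N, C, H, W) -> (N, C + 2, H, W) shape change is taken from the docstring above:

        import torch

        def add_coord_channels(x: torch.Tensor) -> torch.Tensor:
            # x: (N, C, H, W) -> (N, C + 2, H, W), appending row and column coordinate maps
            n, _, h, w = x.shape
            ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
            xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
            return torch.cat([x, ys, xs], dim=1)

        x = torch.randn(2, 3, 28, 28)
        assert add_coord_channels(x).shape == (2, 5, 28, 28)
        # CoordConv2D.augment_input(x) is documented to produce the same (N, C + 2, H, W) shape.
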
class nntoolbox.vision.components.layers.CoordConvolutionalLayer(in_channels, out_channels, kernel_size=3, stride=1, padding=0, bias=False, activation=<class 'torch.nn.modules.activation.ReLU'>)
    Bases: torch.nn.modules.container.Sequential

    Simple convolutional layer: input -> conv2d -> activation -> batch norm 2d

    training: bool
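    A usage sketch, assuming CoordConvolutionalLayer mirrors ConvolutionalLayer but with a CoordConv2D convolution underneath:

        import torch
        from nntoolbox.vision.components.layers import CoordConvolutionalLayer

        layer = CoordConvolutionalLayer(in_channels=3, out_channels=16, padding=1)
        x = torch.randn(2, 3, 32, 32)
        y = layer(x)                               # conv (with coordinate channels) -> ReLU -> BatchNorm2d
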
class nntoolbox.vision.components.layers.Flatten
    Bases: torch.nn.modules.module.Module

    forward(input)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
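    Flatten carries no docstring here; it is assumed to collapse every dimension except the batch dimension, as flatten layers usually do:

        import torch
        from nntoolbox.vision.components.layers import Flatten

        x = torch.randn(4, 8, 7, 7)
        y = Flatten()(x)                           # assumed output shape: (4, 8 * 7 * 7) = (4, 392)
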
class nntoolbox.vision.components.layers.HighwayConvolutionalLayer(in_channels, main)
    Bases: nntoolbox.components.components.HighwayLayer

    Highway layer (for images):

        y = T(x) * H(x) + (1 - T(x)) * x

    training: bool
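    A sketch of how the highway gating might be used; main is assumed to be the channel-preserving transform H(x), with the gate T(x) constructed internally from in_channels:

        import torch
        from torch import nn
        from nntoolbox.vision.components.layers import HighwayConvolutionalLayer

        main = nn.Sequential(                      # H(x): must preserve the (N, 16, H, W) shape
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        highway = HighwayConvolutionalLayer(in_channels=16, main=main)
        x = torch.randn(2, 16, 32, 32)
        y = highway(x)                             # y = T(x) * H(x) + (1 - T(x)) * x, same shape as x
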
class nntoolbox.vision.components.layers.InputNormalization(mean, std)
    Bases: torch.nn.modules.module.Module

    Normalize the input before feeding it into a network. Adapted from https://pytorch.org/tutorials/advanced/neural_style_tutorial.html

    forward(img)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
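    A usage sketch with the ImageNet statistics used in the linked style-transfer tutorial; the exact form in which mean and std must be passed (lists vs. tensors) is an assumption here:

        import torch
        from nntoolbox.vision.components.layers import InputNormalization

        normalize = InputNormalization(mean=torch.tensor([0.485, 0.456, 0.406]),
                                       std=torch.tensor([0.229, 0.224, 0.225]))
        img = torch.rand(1, 3, 224, 224)
        out = normalize(img)                       # per-channel (img - mean) / std
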
class nntoolbox.vision.components.layers.Reshape
    Bases: torch.nn.modules.module.Module

    forward(input, new_shape)
        Defines the computation performed at every call.

        Should be overridden by all subclasses.

        Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

    training: bool
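    A usage sketch for Reshape; new_shape is passed to forward at call time and is assumed to be the full target shape:

        import torch
        from nntoolbox.vision.components.layers import Reshape

        x = torch.randn(4, 8, 7, 7)
        y = Reshape()(x, (4, 392))                 # assumed equivalent to x.reshape(4, 392)
        assert y.shape == (4, 392)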