nntoolbox.vision.components.local module

Locally connected layer and subsampling layer for 2D input

class nntoolbox.vision.components.local.CondConv2d(num_experts: int, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')[source]

Bases: torch.nn.modules.conv.Conv2d

Conditionally Parameterized Convolution Layer.

References:

Brandon Yang, Gabriel Bender, Quoc V. Le, Jiquan Ngiam. “CondConv: Conditionally Parameterized Convolutions for Efficient Inference.” https://arxiv.org/abs/1904.04971

The PyTorch implementation of Conv2d: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
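To make the idea concrete: CondConv computes per-example routing weights from the input and mixes the expert kernels into a single kernel before convolving. The following is a minimal, self-contained sketch of that computation under assumed shapes and a made-up routing matrix; it illustrates the technique from the paper, not this class's actual code. A usage sketch for the class itself follows the attribute listing below.

    import torch
    import torch.nn.functional as F

    num_experts, out_ch, in_ch, k = 4, 16, 3, 3
    expert_weights = torch.randn(num_experts, out_ch, in_ch, k, k)
    routing_matrix = torch.randn(in_ch, num_experts)  # hypothetical routing parameters

    def cond_conv(x: torch.Tensor) -> torch.Tensor:
        # Routing: one sigmoid-activated coefficient per expert, computed from
        # globally average-pooled input features (as in the CondConv paper).
        pooled = x.mean(dim=(2, 3))                       # (N, in_ch)
        routing = torch.sigmoid(pooled @ routing_matrix)  # (N, num_experts)

        outputs = []
        for i in range(x.size(0)):
            # Mix the expert kernels into one kernel for this example, then convolve.
            mixed = (routing[i].view(-1, 1, 1, 1, 1) * expert_weights).sum(dim=0)
            outputs.append(F.conv2d(x[i:i + 1], mixed, padding=1))
        return torch.cat(outputs, dim=0)

    y = cond_conv(torch.randn(8, in_ch, 32, 32))  # (8, 16, 32, 32)

The per-example loop is the "branched" formulation; folding the batch into the groups argument of a single grouped convolution gives an equivalent but faster computation, which is presumably the distinction between the branched_forward and efficient_forward methods below.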

bias: Optional[torch.Tensor]
branched_forward(input: torch.Tensor) → torch.Tensor[source]
dilation: Tuple[int, ...]
efficient_forward(input: torch.Tensor) → torch.Tensor[source]
forward(input: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than forward() directly, since the former takes care of running the registered hooks while the latter silently ignores them.

groups: int
kernel_size: Tuple[int, ...]
out_channels: int
output_padding: Tuple[int, ...]
padding: Tuple[int, ...]
padding_mode: str
stride: Tuple[int, ...]
transposed: bool
weight: torch.Tensor
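A hedged usage sketch, based only on the constructor signature above and the Conv2d base class (so the output shape should follow standard Conv2d arithmetic):

    import torch
    from nntoolbox.vision.components.local import CondConv2d

    layer = CondConv2d(num_experts=4, in_channels=3, out_channels=16,
                       kernel_size=3, padding=1)
    x = torch.randn(8, 3, 32, 32)
    y = layer(x)  # expected: (8, 16, 32, 32), as for an ordinary Conv2d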
class nntoolbox.vision.components.local.LocallyConnected2D(in_channels: int, out_channels: int, in_h: int, in_w: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')[source]

Bases: torch.nn.modules.module.Module

Works similarly to Conv2d, but does not share weights across spatial locations. Much more memory-intensive and slower than Conv2d (due to the suboptimal native PyTorch implementation). (UNTESTED)

Example usages:

Yaniv Taigman et al. “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf
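The mechanism can be sketched with torch.nn.functional.unfold: every output location has its own weights rather than sharing one kernel. This is an illustrative sketch of the technique with assumed shapes, not this class's actual implementation:

    import torch
    import torch.nn.functional as F

    N, in_ch, H, W = 2, 3, 8, 8
    out_ch, k = 4, 3
    H_out, W_out = H - k + 1, W - k + 1  # stride=1, no padding

    # One independent weight vector per output location: the trailing
    # (H_out * W_out) dimension is what Conv2d, which shares a single
    # kernel everywhere, does not have.
    weight = torch.randn(out_ch, in_ch * k * k, H_out * W_out)

    x = torch.randn(N, in_ch, H, W)
    patches = F.unfold(x, kernel_size=k)  # (N, in_ch*k*k, H_out*W_out)
    out = torch.einsum('npl,opl->nol', patches, weight)
    out = out.view(N, out_ch, H_out, W_out)

That extra location dimension is also why the layer is so memory-intensive: the parameter count scales with the spatial size of the output. A usage sketch for the class itself follows the method listing below.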

compute_output_shape(height: int, width: int) → Tuple[int, int][source]
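The method is otherwise undocumented; a reasonable assumption is that it applies the standard Conv2d output-size arithmetic per spatial dimension, e.g.:

    import math

    def conv_out(size: int, kernel: int, stride: int = 1,
                 padding: int = 0, dilation: int = 1) -> int:
        # Standard Conv2d arithmetic (assumed, not confirmed by the source).
        return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

    conv_out(28, 3)  # -> 26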
forward(input: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than forward() directly, since the former takes care of running the registered hooks while the latter silently ignores them.

reset_parameters()[source]
training: bool
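A hedged usage sketch based on the constructor signature above; note that in_h and in_w must match the incoming spatial size, since every location carries its own weights (and the class is marked UNTESTED):

    import torch
    from nntoolbox.vision.components.local import LocallyConnected2D

    layer = LocallyConnected2D(in_channels=3, out_channels=8, in_h=28, in_w=28,
                               kernel_size=3, padding=1)
    x = torch.randn(4, 3, 28, 28)
    y = layer(x)  # expected: (4, 8, 28, 28) with kernel_size=3, padding=1, stride=1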
class nntoolbox.vision.components.local.Subsampling2D(in_channels: int, kernel_size: Union[int, Tuple[int, int]] = 2, stride: Union[int, Tuple[int, int]] = 2, padding: Union[int, Tuple[int, int]] = 0, bias: bool = True, trainable: bool = True, ceil_mode: bool = False, count_include_pad: bool = True)[source]

Bases: torch.nn.modules.pooling.AvgPool2d

For each feature map of the input, subsample one patch at a time: sum the values in the patch, then apply a trainable linear transformation. Used in LeNet. (UNTESTED)

References:

Yann LeCun et al. “Gradient-Based Learning Applied to Document Recognition.” http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
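The operation the docstring describes can be sketched as average pooling followed by a per-channel affine map; the per-channel coefficient absorbs the 1/(k*k) factor that separates averaging from LeNet's summing. A minimal illustration with made-up parameters, not this class's actual code:

    import torch
    import torch.nn.functional as F

    in_channels, k = 6, 2
    scale = torch.randn(in_channels)  # hypothetical trainable coefficient per channel
    bias = torch.randn(in_channels)   # hypothetical trainable bias per channel

    def subsample(x: torch.Tensor) -> torch.Tensor:
        # Average over each k x k patch, then apply the per-channel linear map.
        pooled = F.avg_pool2d(x, kernel_size=k, stride=k)
        return pooled * scale.view(1, -1, 1, 1) + bias.view(1, -1, 1, 1)

    y = subsample(torch.randn(1, in_channels, 28, 28))  # (1, 6, 14, 14)

Per the constructor signature, the layer itself would be created as Subsampling2D(in_channels=6); trainable=False presumably freezes the per-channel coefficients.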

ceil_mode: bool
count_include_pad: bool
forward(input: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself rather than forward() directly, since the former takes care of running the registered hooks while the latter silently ignores them.

kernel_size: Union[int, Tuple[int, int]]
padding: Union[int, Tuple[int, int]]
stride: Union[int, Tuple[int, int]]