nntoolbox.components.activation module
class nntoolbox.components.activation.LWTA(block_size)[source]
Bases: torch.nn.modules.module.Module

Local Winner-Take-All layer. For every block of k consecutive units, keep only the unit with the highest activation and zero out the rest.

References:
Rupesh Kumar Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, Jürgen Schmidhuber. “Compete to Compute.” https://papers.nips.cc/paper/5059-compete-to-compute.pdf
forward(input: torch.Tensor) → torch.Tensor[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
training: bool
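To make the blockwise winner-take-all operation concrete, here is a hedged, minimal re-implementation sketch. The class name LWTASketch, the reshape-and-mask strategy, and the assumption that competition runs over the last dimension are illustrative choices, not the library's actual code:

```python
import torch
import torch.nn as nn


class LWTASketch(nn.Module):
    """Illustrative sketch of a Local Winner-Take-All layer (not the
    library's implementation).

    For every block of `block_size` consecutive units along the last
    dimension, only the unit with the highest activation is kept; the
    rest are zeroed out.
    """

    def __init__(self, block_size: int):
        super().__init__()
        self.block_size = block_size

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # Reshape the last dimension into blocks of `block_size` units
        # (assumes the feature dimension is divisible by block_size).
        *lead, n = input.shape
        blocks = input.reshape(*lead, n // self.block_size, self.block_size)
        # Mark the position of each block's maximum with a 1.
        winner = blocks.argmax(dim=-1, keepdim=True)
        mask = torch.zeros_like(blocks).scatter_(-1, winner, 1.0)
        # Keep only the per-block winner; zero out everything else.
        out = torch.where(mask > 0, blocks, torch.zeros_like(blocks))
        return out.reshape(*lead, n)


lwta = LWTASketch(block_size=2)
x = torch.tensor([[1.0, 3.0, -2.0, -1.0]])
print(lwta(x))  # winners: 3.0 in the first block, -1.0 in the second
```

Note that the winner is the unit with the highest (signed) activation, so a negative value can win its block, as in the second block above.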
class nntoolbox.components.activation.ZeroCenterRelu(inplace: bool = False)[source]
Bases: torch.nn.modules.activation.ReLU

A zero-centered ReLU, as described by Jeremy Howard of fast.ai: a ReLU whose output is shifted so that its mean is pushed back toward zero.
forward(input: torch.Tensor) → torch.Tensor[source]
Defines the computation performed at every call.

Should be overridden by all subclasses.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
inplace: bool
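As a hedged sketch of the idea: in the fast.ai lectures the shift is a constant 0.5 subtracted after the ReLU, since ReLU outputs are non-negative and therefore have a positive mean. The class name ZeroCenterReluSketch and the 0.5 constant are assumptions for illustration, not necessarily what this library uses:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroCenterReluSketch(nn.ReLU):
    """Illustrative sketch of a zero-centered ReLU (not the library's
    implementation).

    Applies a standard ReLU, then subtracts 0.5 so that the output
    distribution is shifted back toward zero mean. The 0.5 shift is an
    assumption based on the fast.ai lectures.
    """

    def forward(self, input: torch.Tensor) -> torch.Tensor:
        # nn.ReLU.__init__ stores the `inplace` flag on self.inplace.
        return F.relu(input, inplace=self.inplace) - 0.5


act = ZeroCenterReluSketch()
print(act(torch.tensor([-1.0, 0.0, 2.0])))  # -1 and 0 clamp to 0, then all values shift down by 0.5
```

Subclassing nn.ReLU, as the documented class does, means the `inplace` flag is handled by the parent constructor and only `forward` needs to be overridden.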