nntoolbox.learner module
class nntoolbox.learner.DistillationLearner(train_data: torch.utils.data.dataloader.DataLoader, val_data: torch.utils.data.dataloader.DataLoader, model: torch.nn.modules.module.Module, teacher: torch.nn.modules.module.Module, criterion: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, temperature: float, teacher_weight: float, hard_label_weight: float, device=device(type='cpu'))

    Bases: nntoolbox.learner.SupervisedLearner

    Distills knowledge from a large teacher network into a smaller student model (UNTESTED).

    References:

    Geoffrey Hinton, Oriol Vinyals, Jeff Dean. "Distilling the Knowledge in a Neural Network." https://arxiv.org/abs/1503.02531

    TTIC Distinguished Lecture Series - Geoffrey Hinton. https://www.youtube.com/watch?v=EK61htlw8hY
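The exact loss combination used by DistillationLearner is not shown on this page. Below is a minimal sketch of the standard Hinton-style distillation objective, assuming temperature scales the soft targets and teacher_weight / hard_label_weight weight the soft and hard terms; the helper name distillation_loss is hypothetical, not part of the documented API.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          temperature: float,
                          teacher_weight: float,
                          hard_label_weight: float) -> torch.Tensor:
        """Hypothetical sketch of a Hinton-style distillation objective."""
        # Soft-target term: KL divergence between temperature-scaled distributions.
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

        # Hard-label term: ordinary cross entropy against the ground-truth labels.
        hard_loss = F.cross_entropy(student_logits, labels)

        return teacher_weight * soft_loss + hard_label_weight * hard_loss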
class nntoolbox.learner.Learner(train_data: torch.utils.data.dataloader.DataLoader, val_data: torch.utils.data.dataloader.DataLoader, model: torch.nn.modules.module.Module, criterion: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer)

    Bases: object
class nntoolbox.learner.SupervisedLearner(train_data: torch.utils.data.dataloader.DataLoader, val_data: torch.utils.data.dataloader.DataLoader, model: torch.nn.modules.module.Module, criterion: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, device=device(type='cpu'), mixup: bool = False, mixup_alpha: float = 0.4)

    Bases: nntoolbox.learner.Learner
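The mixup and mixup_alpha arguments suggest the standard mixup augmentation of Zhang et al. (2018); how the learner applies it internally is not documented here. A minimal sketch of mixup on a single batch, assuming inputs are blended with a Beta(alpha, alpha) coefficient and the loss is interpolated between the two label sets (the helper names mixup_batch and mixup_loss are hypothetical):

    import numpy as np
    import torch
    import torch.nn.functional as F

    def mixup_batch(inputs: torch.Tensor, targets: torch.Tensor, alpha: float = 0.4):
        """Hypothetical sketch of mixup for one batch (Zhang et al., 2018)."""
        lam = float(np.random.beta(alpha, alpha))   # mixing coefficient
        perm = torch.randperm(inputs.size(0))       # random pairing within the batch
        mixed_inputs = lam * inputs + (1.0 - lam) * inputs[perm]
        return mixed_inputs, targets, targets[perm], lam

    def mixup_loss(logits: torch.Tensor, targets_a: torch.Tensor,
                   targets_b: torch.Tensor, lam: float) -> torch.Tensor:
        # Interpolate the loss between the two sets of labels.
        return (lam * F.cross_entropy(logits, targets_a)
                + (1.0 - lam) * F.cross_entropy(logits, targets_b))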
    learn(n_epoch: int, callbacks: Optional[Iterable[nntoolbox.callbacks.callbacks.Callback]] = None, metrics: Optional[Dict[str, nntoolbox.metrics.metrics.Metric]] = None, final_metric: str = 'accuracy', load_path=None) → float
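A minimal usage sketch based only on the signatures above. The toy tensors and model are placeholders, and the behavior of learn() with the default callbacks and metrics (and the suitability of CrossEntropyLoss here) is an assumption, not something this page documents.

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset

    from nntoolbox.learner import SupervisedLearner

    # Toy random data standing in for a real 10-class dataset (placeholder only).
    train_dataset = TensorDataset(torch.randn(512, 3 * 32 * 32), torch.randint(0, 10, (512,)))
    val_dataset = TensorDataset(torch.randn(128, 3 * 32 * 32), torch.randint(0, 10, (128,)))
    train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
    val_loader = DataLoader(val_dataset, batch_size=64)

    model = nn.Linear(3 * 32 * 32, 10)   # minimal stand-in model

    learner = SupervisedLearner(
        train_data=train_loader,
        val_data=val_loader,
        model=model,
        criterion=nn.CrossEntropyLoss(),
        optimizer=optim.Adam(model.parameters()),
        device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
        mixup=True,           # enable mixup augmentation
        mixup_alpha=0.4,
    )

    # Train for a few epochs; learn() returns the final value of `final_metric`.
    final_score = learner.learn(n_epoch=5, final_metric="accuracy")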