finetuner.tuner.pytorch package#

Submodules#

Module contents#

class finetuner.tuner.pytorch.CollateAll(content_collate_fn)[source]#

Bases: object

class finetuner.tuner.pytorch.PytorchTuner(embed_model=None, loss='SiameseLoss', configure_optimizer=None, learning_rate=0.001, scheduler_step='batch', callbacks=None, device='cpu', **kwargs)[source]#

Bases: finetuner.tuner.base.BaseTuner[torch.nn.Module, torch.utils.data.dataloader.DataLoader, torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler._LRScheduler]

Create the tuner instance.

Parameters
  • embed_model (Optional[~AnyDNN]) – Model that produces embeddings from inputs.

  • loss (Union[BaseLoss, str]) – Either the loss object instance, or the name of the loss function. Currently available losses are SiameseLoss and TripletLoss.

  • configure_optimizer (Optional[Callable[[~AnyDNN], Union[~AnyOptimizer, Tuple[~AnyOptimizer, ~AnyScheduler]]]]) –

    A function that allows you to provide a custom optimizer and learning rate. It should take a single input, the embedding model, and return either an optimizer alone or a tuple of an optimizer and a learning rate scheduler (see the sketch after this parameter list).

    For Keras, you should pass the learning rate schedule directly to the optimizer via the learning_rate argument of its __init__ method. It should be an instance of a subclass of tf.keras.optimizers.schedules.LearningRateSchedule, not an instance of the callback (tf.keras.callbacks.LearningRateScheduler).

  • learning_rate (float) – Learning rate for the default optimizer. If you provide a custom optimizer, this learning rate will not apply.

  • scheduler_step (str) –

    The interval at which the learning rate scheduler’s step function is called. Valid options are “batch” and “epoch”.

    For Keras, this option has no effect, as LearningRateScheduler instances are called by the optimizer on each step automatically.

  • callbacks (Optional[List[BaseCallback]]) – A list of callbacks. The progress bar callback will be prepended to this list.

  • device (str) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).
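
Below is a minimal construction sketch. The embedding network, the Adam optimizer, and the StepLR scheduler are illustrative placeholders chosen for this example, not defaults of the module; only the keyword arguments mirror the documented parameters above.

```python
import torch
import torch.nn as nn

from finetuner.tuner.pytorch import PytorchTuner


def configure_optimizer(model: nn.Module):
    # Return an optimizer, or an (optimizer, scheduler) tuple, built from the embed model.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    return optimizer, scheduler


# Placeholder embedding model, for illustration only.
embed_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 32),
)

tuner = PytorchTuner(
    embed_model=embed_model,
    loss='TripletLoss',
    configure_optimizer=configure_optimizer,
    scheduler_step='epoch',  # call the scheduler's step() once per epoch
    device='cpu',
)
```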

save(*args, **kwargs)[source]#

Save the embedding model.

You need to pass the path where to save the model, either as a positional argument (in args) or as the f keyword argument (in kwargs).

Parameters
  • args – Positional arguments to pass to the torch.save function.

  • kwargs – Keyword arguments to pass to the torch.save function.
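
A brief usage sketch, assuming a tuner instance named tuner as constructed above; the file name is a hypothetical placeholder. Both forms forward the path to torch.save.

```python
# Positional form: the path is forwarded to torch.save.
tuner.save('tuned_model.pt')

# Keyword form: the path is passed under the f key, as noted above.
tuner.save(f='tuned_model.pt')
```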

state: finetuner.tuner.state.TunerState#