finetuner.tuner.base module#

class finetuner.tuner.base.BaseLoss(*args, **kwds)[source]#

Bases: Generic[finetuner.helper.AnyTensor]

Base loss class.

The subclasses should, in addition to implementing the abstract methods defined here, also implement the framework-specific forward method, in which they first use the miner to mine indices and then produce the loss by calling compute on the embeddings and the miner's outputs.

distance: str#
miner: Optional[finetuner.tuner.miner.base.BaseMiner]#

abstract compute(embeddings, indices)[source]#

Compute the loss using embeddings and indices that the miner outputs

Return type

~AnyTensor

abstract get_default_miner(is_session_dataset)[source]#

Get the default miner for this loss, given the dataset type

Return type

BaseMiner
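
To illustrate how these pieces fit together, here is a minimal sketch of a possible subclass, assuming PyTorch. The contrastive formula, the inline _ToyMiner and its mine method are hypothetical placeholders for illustration only, not part of the library:

    import torch
    from torch import nn

    from finetuner.tuner.base import BaseLoss

    class _ToyMiner:
        # Hypothetical stand-in for a real miner; an actual miner would
        # subclass finetuner.tuner.miner.base.BaseMiner.
        def mine(self, labels, embeddings):
            # Pair every item with its neighbour; label 1 if the classes match, else 0.
            left = torch.arange(0, len(labels) - 1)
            right = left + 1
            match = (labels[left] == labels[right]).long()
            return left, right, match

    class MyContrastiveLoss(nn.Module, BaseLoss):
        # Hypothetical loss subclass, shown only to illustrate the control flow.

        distance = 'euclidean'

        def __init__(self, margin=1.0, miner=None):
            super().__init__()
            self.margin = margin
            self.miner = miner

        def get_default_miner(self, is_session_dataset):
            # A real implementation would return a miner suited to the dataset type.
            return _ToyMiner()

        def compute(self, embeddings, indices):
            left, right, match = indices
            dists = nn.functional.pairwise_distance(embeddings[left], embeddings[right])
            match = match.float()
            loss = match * dists.pow(2) + (1 - match) * (self.margin - dists).clamp(min=0).pow(2)
            return loss.mean()

        def forward(self, embeddings, labels):
            # Framework-specific entry point: mine indices first, then compute the loss.
            if self.miner is None:
                self.miner = self.get_default_miner(is_session_dataset=False)
            indices = self.miner.mine(labels, embeddings)
            return self.compute(embeddings, indices)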

class finetuner.tuner.base.BaseTuner(embed_model=None, loss='SiameseLoss', configure_optimizer=None, learning_rate=0.001, scheduler_step='batch', callbacks=None, device='cpu', **kwargs)[source]#

Bases: abc.ABC, Generic[finetuner.helper.AnyDNN, finetuner.helper.AnyDataLoader, finetuner.helper.AnyOptimizer, finetuner.helper.AnyScheduler]

Create the tuner instance.

Parameters
  • embed_model (Optional[~AnyDNN]) – Model that produces embeddings from inputs.

  • loss (Union[BaseLoss, str]) – Either the loss object instance, or the name of the loss function. Currently available losses are SiameseLoss and TripletLoss.

  • configure_optimizer (Optional[Callable[[~AnyDNN], Union[~AnyOptimizer, Tuple[~AnyOptimizer, ~AnyScheduler]]]]) –

    A function that allows you to provide a custom optimizer and learning rate. The function should take a single input, the embedding model, and return either just an optimizer or a tuple of an optimizer and a learning rate scheduler (see the sketch after this parameter list).

    For Keras, you should provide the learning rate schedule directly to the optimizer via the learning_rate argument of its __init__ function. This should be an instance of a subclass of tf.keras.optimizers.schedules.LearningRateSchedule, not an instance of the callback tf.keras.callbacks.LearningRateScheduler.

  • learning_rate (float) – Learning rate for the default optimizer. If you provide a custom optimizer, this learning rate will not apply.

  • scheduler_step (str) –

    The interval at which the learning rate scheduler’s step function should be called. Valid options are “batch” and “epoch”.

    For Keras, this option has no effect, as LearningRateSchedule instances are called by the optimizer on each step automatically.

  • callbacks (Optional[List[BaseCallback]]) – A list of callbacks. The progress bar callback will be prepended to this list.

  • device (str) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).
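
For illustration, a minimal sketch of a configure_optimizer function, assuming PyTorch; the optimizer and scheduler chosen here are arbitrary examples, not defaults of the library:

    import torch

    def configure_optimizer(embed_model):
        # Return either just an optimizer, or a tuple of (optimizer, scheduler).
        optimizer = torch.optim.AdamW(embed_model.parameters(), lr=5e-4)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
        return optimizer, scheduler

Passed via the configure_optimizer argument when constructing the tuner, the scheduler above would then be stepped once per batch with scheduler_step='batch' or once per epoch with scheduler_step='epoch'.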

state: finetuner.tuner.state.TunerState#
property embed_model: finetuner.helper.AnyDNN#

Get the base model of this object.

Return type

~AnyDNN

fit(train_data, eval_data=None, preprocess_fn=None, collate_fn=None, epochs=10, batch_size=256, num_items_per_class=None, num_workers=0, **kwargs)[source]#

Finetune the model on the training data.

Parameters
  • train_data (DocumentArray) – Data on which to train the model.

  • eval_data (Optional[DocumentArray]) – Data on which the validation loss is computed.

  • preprocess_fn (Optional[Callable]) – A pre-processing function, used to pre-process documents on the fly. It should take a document from the dataset as input and output whatever content the framework-specific dataloader (and model) accepts.

  • collate_fn (Optional[Callable]) – The collation function to merge the content of individual items into a batch. It should accept a list with the content of each item and output a tensor (or a list/dict of tensors) that feeds directly into the embedding model.

  • epochs (int) – Number of epochs to train the model.

  • batch_size (int) – The batch size to use for training and evaluation.

  • num_items_per_class (Optional[int]) – Number of items from a single class to include in the batch. Only relevant for class datasets.

  • num_workers (int) – Number of workers used for loading the data. This works only with PyTorch and PaddlePaddle, and has no effect when using a Keras model.
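
For illustration, a usage sketch of fit for a text embedding model, assuming PyTorch. Here tuner, train_docs, eval_docs and tokenizer are placeholders assumed to exist already, and the pre-processing and collation logic is arbitrary:

    def preprocess_fn(doc):
        # Reduce a Document to the raw content the dataloader should batch
        # (placeholder logic: lower-cased text).
        return doc.text.lower()

    def collate_fn(contents):
        # Merge the pre-processed contents of a batch into model-ready tensors;
        # `tokenizer` is a placeholder for whatever the embedding model expects.
        return tokenizer(contents, padding=True, return_tensors='pt')

    tuner.fit(
        train_data=train_docs,       # DocumentArray used for training
        eval_data=eval_docs,         # optional DocumentArray for validation loss
        preprocess_fn=preprocess_fn,
        collate_fn=collate_fn,
        epochs=5,
        batch_size=128,
        num_items_per_class=4,       # only relevant for class datasets
        num_workers=2,               # PyTorch / PaddlePaddle only
    )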

abstract save(*args, **kwargs)[source]#

Save the weights of the embed_model.
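
For illustration, one way a framework-specific subclass might fulfil this, assuming PyTorch; MyPytorchTuner, the path argument and the state_dict layout are hypothetical, and the subclasses shipped with the library may save their weights differently:

    import torch

    from finetuner.tuner.base import BaseTuner

    class MyPytorchTuner(BaseTuner):
        # ... other abstract methods omitted from this sketch ...

        def save(self, path, **kwargs):
            # Persist only the weights of the embedding model.
            torch.save(self.embed_model.state_dict(), path, **kwargs)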