finetuner.tuner.paddle package#
Submodules#
Module contents#
- class finetuner.tuner.paddle.PaddleTuner(embed_model=None, loss='SiameseLoss', configure_optimizer=None, learning_rate=0.001, scheduler_step='batch', callbacks=None, device='cpu', **kwargs)[source]#
Bases: finetuner.tuner.base.BaseTuner[paddle.nn.Layer, paddle.io.DataLoader, paddle.optimizer.Optimizer, paddle.optimizer.lr.LRScheduler]
Create the tuner instance.
- Parameters
  - embed_model (Optional[~AnyDNN]) – Model that produces embeddings from inputs.
  - loss (Union[BaseLoss, str]) – Either the loss object instance, or the name of the loss function. Currently available losses are SiameseLoss and TripletLoss.
  - configure_optimizer (Optional[Callable[[~AnyDNN], Union[~AnyOptimizer, Tuple[~AnyOptimizer, ~AnyScheduler]]]]) – A function that allows you to provide a custom optimizer and learning rate. The function should take one input - the embedding model - and return either just an optimizer, or a tuple of an optimizer and a learning rate scheduler (see the sketch after this parameter list).
    For Keras, you should provide the learning rate scheduler directly to the optimizer using the learning_rate argument in its __init__ function - this should be an instance of a subclass of tf.keras.optimizer.schedulers.LearningRateScheduler, and not an instance of the callback (tf.keras.callbacks.LearningRateScheduler).
  - learning_rate (float) – Learning rate for the default optimizer. If you provide a custom optimizer, this learning rate will not apply.
  - scheduler_step (str) – The interval at which the learning rate scheduler's step function should be called. Valid options are "batch" and "epoch".
    For Keras, this option has no effect, as LearningRateScheduler instances are called by the optimizer on each step automatically.
  - callbacks (Optional[List[BaseCallback]]) – A list of callbacks. The progress bar callback will be prepended to this list.
  - device (str) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).
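A minimal construction sketch, assuming a toy paddle.nn.Sequential embedding model (the layer sizes and input shape are illustrative only, not part of the API):

```python
import paddle

from finetuner.tuner.paddle import PaddleTuner

# Any paddle.nn.Layer that maps inputs to embedding vectors can serve as the
# embedding model; this small MLP is purely illustrative.
embed_model = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(784, 128),
    paddle.nn.ReLU(),
    paddle.nn.Linear(128, 32),
)

tuner = PaddleTuner(
    embed_model=embed_model,
    loss='TripletLoss',      # or 'SiameseLoss', or a BaseLoss instance
    learning_rate=1e-3,      # used by the default optimizer only
    scheduler_step='batch',
    device='cpu',            # use 'cuda' to train on GPU
)
```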
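A sketch of a custom configure_optimizer function, assuming Paddle's built-in Adam optimizer and StepDecay scheduler (any other paddle.optimizer / paddle.optimizer.lr classes could be substituted). It receives the embedding model and returns an (optimizer, scheduler) tuple:

```python
import paddle


def configure_optimizer(model: paddle.nn.Layer):
    # The scheduler is handed to the optimizer as its learning rate; the tuner
    # then calls scheduler.step() once per batch or per epoch, according to
    # the scheduler_step argument.
    scheduler = paddle.optimizer.lr.StepDecay(
        learning_rate=1e-3, step_size=1000, gamma=0.5
    )
    optimizer = paddle.optimizer.Adam(
        learning_rate=scheduler, parameters=model.parameters()
    )
    return optimizer, scheduler
```

The function is then passed as configure_optimizer=configure_optimizer when constructing the PaddleTuner; in that case the tuner's own learning_rate argument has no effect.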