finetuner package

Module contents

finetuner.fit(
    model: AnyDNN,
    train_data: DocumentArray,
    eval_data: Optional[DocumentArray] = None,
    epochs: int = 10,
    batch_size: int = 256,
    loss: Union[str, AnyDNN] = 'SiameseLoss',
    configure_optimizer: Optional[Callable[[AnyDNN], Union[AnyOptimizer, Tuple[AnyOptimizer, AnyScheduler]]]] = None,
    learning_rate: float = 0.001,
    scheduler_step: str = 'batch',
    device: str = 'cpu',
    preprocess_fn: Optional[PreprocFnType] = None,
    collate_fn: Optional[CollateFnType] = None,
    num_items_per_class: Optional[int] = None,
    callbacks: Optional[List[BaseCallback]] = None,
    num_workers: int = 0,
) -> AnyDNN  [source]

A second overload additionally accepts options for converting a general model into an embedding model before tuning (to_embedding_model, input_size, input_dtype, layer_name, freeze, projection_head):

finetuner.fit(
    model: AnyDNN,
    train_data: DocumentArray,
    eval_data: Optional[DocumentArray] = None,
    epochs: int = 10,
    batch_size: int = 256,
    loss: Union[str, AnyDNN] = 'SiameseLoss',
    configure_optimizer: Optional[Callable[[AnyDNN], Union[AnyOptimizer, Tuple[AnyOptimizer, AnyScheduler]]]] = None,
    learning_rate: float = 0.001,
    scheduler_step: str = 'batch',
    device: str = 'cpu',
    preprocess_fn: Optional[PreprocFnType] = None,
    collate_fn: Optional[CollateFnType] = None,
    num_items_per_class: Optional[int] = None,
    callbacks: Optional[List[BaseCallback]] = None,
    num_workers: int = 0,
    to_embedding_model: bool = True,
    input_size: Optional[Tuple[int, ...]] = None,
    input_dtype: str = 'float32',
    layer_name: Optional[str] = None,
    freeze: Union[bool, List[str]] = False,
    projection_head: Optional[AnyDNN] = None,
) -> AnyDNN
Return type
    AnyDNN
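The configure_optimizer parameter expects a callable that receives the model and returns either an optimizer or an (optimizer, scheduler) tuple; when a scheduler is returned, scheduler_step controls whether it steps per 'batch' or per 'epoch'. The sketch below illustrates only this callable's shape. DummyModel, SGD, and StepLR here are hypothetical stdlib stand-ins, not finetuner or PyTorch classes; in a real run they would be e.g. a torch.nn.Module, torch.optim.SGD, and torch.optim.lr_scheduler.StepLR.

```python
from typing import Tuple

class DummyModel:
    """Stand-in for AnyDNN (e.g. a torch.nn.Module)."""

class SGD:
    """Stand-in for AnyOptimizer."""
    def __init__(self, model: DummyModel, lr: float):
        self.lr = lr

class StepLR:
    """Stand-in for AnyScheduler."""
    def __init__(self, optimizer: SGD, step_size: int):
        self.optimizer = optimizer
        self.step_size = step_size

def configure_optimizer(model: DummyModel) -> Tuple[SGD, StepLR]:
    # Build the optimizer and, optionally, a learning-rate scheduler for
    # `model`; fit() would call this once and drive both during training.
    opt = SGD(model, lr=1e-3)
    sched = StepLR(opt, step_size=1)  # stepped per batch when scheduler_step='batch'
    return opt, sched

opt, sched = configure_optimizer(DummyModel())
```

Returning just the optimizer (no tuple) is also accepted by the annotated signature, in which case no scheduler is used.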