finetuner.tailor package#

Subpackages#

Submodules#

Module contents#

finetuner.tailor.to_embedding_model(model, layer_name=None, input_size=None, input_dtype='float32', freeze=False, projection_head=None, device='cpu', **kwargs)[source]#

Convert a general DNN model into an embedding model.

Parameters
  • model (AnyDNN) – The DNN model to be converted.

  • layer_name (Optional[str]) – The name of the layer whose output is used as the embedding. All layers after that layer are removed. If None, the last layer listed in embedding_layers is used. To see all available names, check the name field of embedding_layers.

  • input_size (Optional[Tuple[int, …]]) – The input size of the DNN model.

  • input_dtype (str) – The input data type of the DNN model.

  • freeze (Union[bool, List[str]]) – If True, freeze all layers before layer_name. If a list of strings, freeze the layers with those names.

  • projection_head (Optional[ForwardRef]) – A module attached to the end of the model; this module must always remain trainable.

  • device (Optional[str]) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).

Return type

AnyDNN
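To make the truncate/freeze/attach behavior concrete, here is a minimal pure-Python sketch of the logic described above. It is a conceptual illustration only, not the Finetuner implementation: the Layer dataclass and to_embedding_model_sketch function are hypothetical stand-ins for a real DNN framework's layer objects.

```python
from dataclasses import dataclass
from typing import List, Optional, Union


@dataclass
class Layer:
    """Hypothetical stand-in for a framework layer."""
    name: str
    trainable: bool = True


def to_embedding_model_sketch(
    layers: List[Layer],
    layer_name: Optional[str] = None,
    freeze: Union[bool, List[str]] = False,
    projection_head: Optional[Layer] = None,
) -> List[Layer]:
    # Default to the last layer when no cut point is given.
    if layer_name is None:
        layer_name = layers[-1].name
    names = [layer.name for layer in layers]
    if layer_name not in names:
        raise ValueError(f'unknown layer: {layer_name!r}')
    # Remove all layers after the embedding layer.
    kept = layers[: names.index(layer_name) + 1]
    # freeze=True freezes every layer before layer_name;
    # a list of names freezes exactly those layers.
    if freeze is True:
        for layer in kept[:-1]:
            layer.trainable = False
    elif isinstance(freeze, list):
        for layer in kept:
            if layer.name in freeze:
                layer.trainable = False
    # The projection head, if attached, always stays trainable.
    if projection_head is not None:
        projection_head.trainable = True
        kept.append(projection_head)
    return kept
```

Calling the real API follows the same shape, e.g. to_embedding_model(model, layer_name='fc1', freeze=True), with the framework-specific model passed in place of the layer list.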

finetuner.tailor.display(model, input_size=None, input_dtype='float32')[source]#

Display the model architecture as a summary table.

Parameters
  • model (AnyDNN) – The DNN model to display.

  • input_size (Optional[Tuple[int, …]]) – The input size of the DNN model.

  • input_dtype (str) – The input data type of the DNN model.

Return type

None
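The kind of table display produces can be sketched with a small stdlib-only formatter. This is a hedged illustration of the general idea (one row per layer, columns padded to a common width), not Finetuner's actual rendering; the row schema is an assumption.

```python
from typing import Sequence, Tuple


def format_summary(rows: Sequence[Tuple[str, str, str]]) -> str:
    """Render (name, output shape, params) rows as a fixed-width text table."""
    headers = ('name', 'output shape', 'params')
    all_rows = [headers, *rows]
    # Pad each column to the width of its longest cell.
    widths = [max(len(str(row[i])) for row in all_rows) for i in range(3)]
    return '\n'.join(
        '  '.join(str(cell).ljust(width) for cell, width in zip(row, widths))
        for row in all_rows
    )
```

A call like format_summary([('conv', '(32, 26, 26)', '320')]) yields a two-line table with aligned columns, mirroring how a per-layer architecture summary reads.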