finetuner.tailor.keras package#

Submodules#

Module contents#

class finetuner.tailor.keras.KerasTailor(model, input_size=None, input_dtype='float32', device='cpu')[source]#

Bases: finetuner.tailor.base.BaseTailor

Tailor class for Keras DNN models.

Tailor converts a general DNN model into an embedding model.

Parameters
  • model (AnyDNN) – a general DNN model

  • input_size (Optional[Tuple[int, …]]) – a sequence of integers defining the shape of the input tensor. Note that the batch size is not part of input_size. It is required for PytorchTailor and PaddleTailor, but not for KerasTailor.

  • input_dtype (str) – the data type of the input tensor.

  • device (Optional[str]) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).
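
A minimal usage sketch; the tf.keras model below is illustrative, and only the KerasTailor constructor arguments come from the signature above:

import tensorflow as tf
from finetuner.tailor.keras import KerasTailor

# A small classification model that will later be tailored into an embedding model.
dense_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# input_size may stay None for Keras models, since the input shape is known to the model itself.
tailor = KerasTailor(dense_model, input_dtype='float32', device='cpu')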

summary(skip_identity_layer=False)[source]#

Interpret the DNN model and produce model information.

Parameters

skip_identity_layer (bool) – whether to skip identity layers in the summary.

Return type

LayerInfoType

Returns

The model information, stored as a dict.
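
A short sketch of inspecting the model, continuing from the constructor example above. It assumes that LayerInfoType is a per-layer list of dicts and that 'name' and 'output_shape' are among the keys; verify both against the actual output:

# Continuing with the tailor constructed above.
layer_infos = tailor.summary(skip_identity_layer=True)

# The key names used here ('name', 'output_shape') are assumptions;
# .get() keeps the loop safe if the actual dicts use different keys.
for info in layer_infos:
    print(info.get('name'), info.get('output_shape'))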

to_embedding_model(layer_name=None, freeze=False, projection_head=None)[source]#

Convert the general DNN model stored in model into an embedding model.

Parameters
  • layer_name (Optional[str]) – the name of the layer to use for the output embeddings. All layers after that layer are removed. When set to None, the last layer listed in embedding_layers is used. To see all available names, check the name field of embedding_layers.

  • freeze (Union[bool, List[str]]) – if set to True, all layers before layer_name are frozen. If set to a list of strings, the layers with those names are frozen.

  • projection_head (Optional[ForwardRef]) – a module attached at the end of the model; this module should always be trainable.

Return type

AnyDNN

Returns

Converted embedding model.
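
A hedged example of cutting and freezing the illustrative model from the constructor example; the layer name 'dense_1' is an assumption about Keras' default naming and should be read off tailor.summary() first:

import tensorflow as tf

# Remove all layers after 'dense_1' and freeze the layers before it.
embedding_model = tailor.to_embedding_model(
    layer_name='dense_1',  # assumed default Keras name of the 64-unit layer
    freeze=True,
)

# The result is a regular Keras model that maps inputs to embedding vectors.
vectors = embedding_model(tf.random.normal((4, 784)))
print(vectors.shape)  # (4, 64) for the illustrative model above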