finetuner.tuner.paddle.losses module#
- finetuner.tuner.paddle.losses.get_distance(embeddings, distance)[source]#
Get a matrix of pairwise distances between the embeddings
- Return type
Tensor
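To illustrate what such a pairwise-distance helper computes, here is a hypothetical numpy re-implementation (not the library's actual paddle code; the function name and branching are assumptions for illustration):

```python
import numpy as np

def pairwise_distance(embeddings, distance):
    """Hypothetical numpy sketch of a pairwise-distance helper."""
    if distance == 'sqeuclidean':
        diff = embeddings[:, None, :] - embeddings[None, :, :]
        return (diff ** 2).sum(axis=-1)
    if distance == 'euclidean':
        diff = embeddings[:, None, :] - embeddings[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))
    if distance == 'cosine':
        # cosine *distance*: 1 - cosine similarity of L2-normalized rows
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        return 1.0 - normed @ normed.T
    raise ValueError(f'Unsupported distance: {distance}')

emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dists = pairwise_distance(emb, 'sqeuclidean')  # symmetric [3, 3] matrix, zeros on the diagonal
```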
- class finetuner.tuner.paddle.losses.PaddleTupleLoss(*args, **kwargs)[source]#
Bases: paddle.nn.Layer, finetuner.tuner.base.BaseLoss[paddle.Tensor]
Base class for all paddle losses.
- class finetuner.tuner.paddle.losses.SiameseLoss(distance='cosine', margin=1.0, miner=None)[source]#
Bases: finetuner.tuner.paddle.losses.PaddleTupleLoss
Computes the loss for a siamese network.
The loss for a pair of objects equals
is_sim * dist + (1 - is_sim) * max(0, margin - dist)
where is_sim equals 1 if the two objects are similar, and 0 if they are not. dist refers to the distance between the two objects, and margin is a number that helps bound the loss for dissimilar objects.
The final loss is the average over the losses for all pairs given by the indices.
Initialize the loss instance
- Parameters
distance (str) – The type of distance to use; available options are "cosine", "euclidean" and "sqeuclidean"
margin (float) – The margin to use in the loss calculation
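The pairwise formula above can be sketched in plain numpy (a hypothetical illustration of the math, not the library's actual paddle implementation):

```python
import numpy as np

def siamese_pair_losses(dists, is_sim, margin=1.0):
    # is_sim * dist + (1 - is_sim) * max(0, margin - dist)
    return is_sim * dists + (1 - is_sim) * np.maximum(0.0, margin - dists)

# one similar and one dissimilar pair, both at distance 0.3:
# the similar pair is penalized by its distance (0.3),
# the dissimilar pair by how far it sits inside the margin (1.0 - 0.3 = 0.7)
losses = siamese_pair_losses(np.array([0.3, 0.3]), np.array([1.0, 0.0]))
loss = losses.mean()  # (0.3 + 0.7) / 2 = 0.5
```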
- compute(embeddings, indices)[source]#
Compute the loss
- Parameters
embeddings (Tensor) – An [N, d] tensor of embeddings
indices (Tuple[Tensor, Tensor, Tensor]) – A tuple of three tensors: the indices of the first objects in each pair, the indices of the second objects, and their similarity targets (1 if the pair is similar, 0 if dissimilar)
- Return type
Tensor
- get_default_miner(is_session_dataset)[source]#
Get the default miner for this loss, given the dataset type
- Return type
Union[SiameseMiner, SiameseSessionMiner]
- class finetuner.tuner.paddle.losses.TripletLoss(distance='cosine', margin=1.0, miner=None)[source]#
Bases: finetuner.tuner.paddle.losses.PaddleTupleLoss
Compute the loss for a triplet network.
The loss for a single triplet equals:
max(dist_pos - dist_neg + margin, 0)
where dist_pos is the distance between the anchor embedding and the positive embedding, dist_neg is the distance between the anchor and the negative embedding, and margin represents a wedge between the desired anchor-negative and anchor-positive distances.
The final loss is the average over the losses for all triplets given by the indices.
Initialize the loss instance
- Parameters
distance (str) – The type of distance to use; available options are "cosine", "euclidean" and "sqeuclidean"
margin (float) – The margin to use in the loss calculation
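The per-triplet formula can be sketched directly (a hypothetical illustration of the math above, not the library's paddle code):

```python
def triplet_loss(dist_pos, dist_neg, margin=1.0):
    # max(dist_pos - dist_neg + margin, 0)
    return max(dist_pos - dist_neg + margin, 0.0)

# negative closer than positive: loss is large and pushes the negative away
hard = triplet_loss(dist_pos=0.9, dist_neg=0.2)  # 0.9 - 0.2 + 1.0 = 1.7
# negative already farther than positive by more than the margin: no loss
easy = triplet_loss(dist_pos=0.2, dist_neg=1.5)  # max(0.2 - 1.5 + 1.0, 0) = 0.0
```

Note how the loss only goes to zero once the anchor-negative distance exceeds the anchor-positive distance by at least margin; merely having dist_neg > dist_pos is not enough.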
- compute(embeddings, indices)[source]#
Compute the loss
- Parameters
embeddings (Tensor) – An [N, d] tensor of embeddings
indices (Tuple[Tensor, Tensor, Tensor]) – A tuple of three tensors: the indices of the anchors, the positive matches, and the negative matches in the embeddings tensor
- Return type
Tensor
- get_default_miner(is_session_dataset)[source]#
Get the default miner for this loss, given the dataset type
- Return type
Union[TripletMiner, TripletSessionMiner]
- class finetuner.tuner.paddle.losses.NTXentLoss(temperature=0.1)[source]#
Bases: paddle.nn.Layer, finetuner.tuner.base.BaseLoss[paddle.Tensor]
Compute the NTXent (Normalized Temperature Cross-Entropy) loss.
This loss function is a temperature-adjusted cross-entropy loss, as defined in the SimCLR paper <https://arxiv.org/abs/2002.05709>. It operates on batches where there are two views of each instance.
Initialize the loss instance.
- Parameters
temperature (float) – The temperature parameter
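A minimal numpy sketch of the NTXent computation, assuming (as an illustrative convention, not necessarily the library's batch layout) that rows 2i and 2i+1 of the batch are the two views of instance i:

```python
import numpy as np

def ntxent_loss(embeddings, temperature=0.1):
    # Hypothetical sketch: rows 2i and 2i+1 are two views of instance i.
    n = embeddings.shape[0]
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ z.T / temperature        # temperature-scaled cosine similarities
    np.fill_diagonal(logits, -np.inf)     # a view is never its own positive
    positives = np.arange(n) ^ 1          # pairs rows 0<->1, 2<->3, ...
    # cross-entropy of each row's positive against all other rows in the batch
    log_prob = logits[np.arange(n), positives] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()

# two instances, each with two (here identical) views: positives are already
# perfectly aligned, so the loss is close to zero
batch = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
loss = ntxent_loss(batch)
```

Lowering the temperature sharpens the softmax, so hard negatives (rows similar to the anchor but not its paired view) dominate the loss.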