# Welcome to Finetuner!

Finetuner lets you tune the weights of any deep neural network for better embeddings on search tasks. It accompanies Jina to deliver the last mile of performance for domain-specific neural search applications.

🎛 Designed for finetuning: a human-in-the-loop deep learning tool for leveling up your pretrained models in domain-specific neural search applications.

🔱 Powerful yet intuitive: all you need is `finetuner.fit()`, a one-liner that unlocks rich features such as siamese/triplet networks, metric learning, self-supervised pretraining, layer pruning, weight freezing, and dimensionality reduction.

⚛️ Framework-agnostic: promises an identical API & user experience across the PyTorch, TensorFlow/Keras and PaddlePaddle deep learning backends.

🧈 DocArray integration: buttery smooth integration with DocArray, reducing the cost of context-switching between experiment and production.

## Quick start

1. Make sure you have Python 3.7+ installed on Linux/macOS, along with one of PyTorch (>=1.9), TensorFlow (>=2.5) or PaddlePaddle. Then install Finetuner:

```bash
pip install finetuner
```
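Finetuner needs one of the three backends importable at runtime. As a quick sanity check, you can probe which ones are installed; the helper below is not part of Finetuner, just an illustrative stdlib sketch:

```python
import importlib.util


def available_backends():
    """Return which supported deep-learning backends are importable."""
    # find_spec returns None when the package is not installed
    return [name for name in ('torch', 'tensorflow', 'paddle')
            if importlib.util.find_spec(name) is not None]


print(available_backends())
```

If the list is empty, install at least one backend before calling `finetuner.fit()`.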

2. In this example, we want to tune the embedding vectors from a ResNet18 on a customized CelebA dataset. Finetuner accepts a docarray DocumentArray, so we load the CelebA images into this format:

```python
from docarray import DocumentArray

data = DocumentArray.from_files('img_align_celeba/*.jpg')


def preproc(doc):
    return (
        doc.load_uri_to_image_tensor(224, 224)  # load image from file into a tensor
        .set_image_tensor_normalization()
        .set_image_tensor_channel_axis(-1, 0)
    )  # No need to change the channel axis if you are using tf/keras


data.apply(preproc)
```
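The channel-axis step above rearranges each image from HWC (channels-last, as loaded from disk) to CHW (channels-first), which PyTorch models expect. A minimal NumPy sketch of the same rearrangement, using a zero array as a stand-in for a loaded image tensor:

```python
import numpy as np

# a fake 224x224 RGB image in channels-last (H, W, C) layout
img_hwc = np.zeros((224, 224, 3))

# move the channel axis from position -1 to position 0 -> (C, H, W)
img_chw = np.moveaxis(img_hwc, -1, 0)

print(img_chw.shape)  # (3, 224, 224)
```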

3. Let’s write a model with any of the following frameworks:

```python
import torchvision

resnet = torchvision.models.resnet18(pretrained=True)
```

```python
import tensorflow as tf

# tf.keras.applications ships no ResNet18; ResNet50 is the closest built-in
resnet = tf.keras.applications.resnet50.ResNet50(weights='imagenet')
```

```python
import paddle

resnet = paddle.vision.models.resnet18(pretrained=True)
```

4. Now feed the model and CelebA data into Finetuner.

```python
import finetuner as ft

tuned_model = ft.fit(
    model=resnet,
    train_data=data,
    loss='TripletLoss',
    epochs=20,
    device='cuda',
    batch_size=128,
    to_embedding_model=True,
    input_size=(3, 224, 224),
    layer_name='adaptiveavgpool2d_67',  # layer before fc as feature extractor
    freeze=False,
)
```
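The 'TripletLoss' setting trains the network so that an anchor embedding ends up closer to a positive example (same identity) than to a negative one, by at least a margin. A hedged NumPy sketch of the standard triplet hinge loss (the margin value here is illustrative, not Finetuner's default):

```python
import numpy as np


def triplet_loss(anchor, positive, negative, margin=1.0):
    # Euclidean distances from the anchor to the positive and negative
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    # hinge: penalize only when the positive is not closer by at least `margin`
    return max(d_pos - d_neg + margin, 0.0)


a = np.array([0.0, 0.0])
p = np.array([0.0, 1.0])  # distance 1 from the anchor
n = np.array([3.0, 0.0])  # distance 3 from the anchor
print(triplet_loss(a, p, n))  # 1 - 3 + 1 = -1, clamped to 0.0
```

Swapping `p` and `n` in the call yields a positive loss (3 - 1 + 1 = 3.0), which is the gradient signal that pulls matching pairs together during training.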


Now that you’re set up, let’s dive into more of how Finetuner works and improves the performance of your neural search apps.

## Next steps

Finetuner is extremely easy to learn: all you need is `finetuner.fit()`!