# Encode Documents

Once fine-tuning is finished, it’s time to actually use the model. You can use the fine-tuned model directly to encode `DocumentArray` objects, or set up an encoding service.

## Embed DocumentArray

To embed a `DocumentArray` with a fine-tuned model, retrieve your run's model via the `get_model` function and embed the `DocumentArray` via the `encode` function:

```python
from docarray import DocumentArray, Document
import finetuner

token = finetuner.get_token()
run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN'
)

model = finetuner.get_model(run.artifact_id, token=token)

da = DocumentArray([Document(text='some text to encode')])

finetuner.encode(model=model, data=da)

for doc in da:
    print(f'Text of the returned document: {doc.text}')
    print(f'Shape of the embedding: {doc.embedding.shape}')
```

If you saved the artifact locally, you can load the model directly from the zip file instead:

```python
from docarray import DocumentArray, Document
import finetuner

model = finetuner.get_model('/path/to/YOUR-MODEL.zip')

da = DocumentArray([Document(text='some text to encode')])

finetuner.encode(model=model, data=da)

for doc in da:
    print(f'Text of the returned document: {doc.text}')
    print(f'Shape of the embedding: {doc.embedding.shape}')
```

In both cases, the output looks like this:

```
Text of the returned document: some text to encode
Shape of the embedding: (768,)
```


**Inference with ONNX:** if you set `to_onnx=True` when calling the `finetuner.fit` function, please load the model with `model = finetuner.get_model('/path/to/YOUR-MODEL.zip', is_onnx=True)`.
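
For example, here is a minimal sketch of encoding with an ONNX artifact, assuming the run was created with `to_onnx=True` and the artifact was downloaded locally:

```python
from docarray import DocumentArray, Document
import finetuner

# Load the ONNX flavor of the artifact instead of the default model.
model = finetuner.get_model('/path/to/YOUR-MODEL.zip', is_onnx=True)

da = DocumentArray([Document(text='some text to encode')])
finetuner.encode(model=model, data=da)  # encoding works the same as before
```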

## Fine-tuned model as Executor

Finetuner, being part of the Jina ecosystem, provides a convenient way to use tuned models via Jina Executors.

We’ve created the `FinetunerExecutor`, which can be added to a Jina Flow and loads any fine-tuned model. More specifically, the executor exposes an `/encode` endpoint that embeds Documents using the fine-tuned model.

Loading a tuned model is simple! You just need to provide a few parameters under the `uses_with` argument when adding the `FinetunerExecutor` to the Flow. You have three options:

```python
import finetuner
from jina import Flow

token = finetuner.get_token()
run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN'
)

f = Flow().add(
    uses='jinahub+docker://FinetunerExecutor/v0.10.2',  # use v0.10.2-gpu for the GPU executor.
    uses_with={'artifact': run.artifact_id, 'token': token},
)
```

Or, if you saved the artifact locally, mount it into the container:

```python
from jina import Flow

f = Flow().add(
    uses='jinahub+docker://FinetunerExecutor/v0.10.2',  # use v0.10.2-gpu for the GPU executor.
    uses_with={'artifact': '/mnt/YOUR-MODEL.zip'},
    volumes=['/your/local/path/:/mnt']  # mount your model path to docker.
)
```

Or configure the executor in a Flow YAML file:

```yaml
jtype: Flow
with:
  port: 51000
  protocol: grpc
executors:
  - uses: jinahub+docker://FinetunerExecutor/v0.10.2
    with:
      artifact: 'COPY-YOUR-ARTIFACT-ID-HERE'
      token: 'COPY-YOUR-TOKEN-HERE'  # or better set as env
```
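
If you go the YAML route, you can then build the Flow from the config file. A minimal sketch, assuming you saved the configuration above as `flow.yml`:

```python
from jina import Flow

# Build the Flow from the YAML configuration shown above.
f = Flow.load_config('flow.yml')
```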


As you can see, it’s super easy! If you did not call `save_artifact`, you need to provide the `artifact_id` and `token`. `FinetunerExecutor` will automatically pull your model from cloud storage into the container.

On the other hand, if you saved the artifact locally, please mount the zipped artifact into the Docker container. `FinetunerExecutor` will unzip the artifact and load the model.

You can start your flow with:

```python
from docarray import DocumentArray, Document

with f:
    # in this example, we fine-tuned a BERT model and embed a Document.
    returned_docs = f.post(
        on='/encode',
        inputs=DocumentArray(
            [
                Document(
                    text='some text to encode'
                )
            ]
        )
    )

for doc in returned_docs:
    print(f'Text of the returned document: {doc.text}')
    print(f'Shape of the embedding: {doc.embedding.shape}')
```

```
Text of the returned document: some text to encode
Shape of the embedding: (768,)
```
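
You can also send requests to a running Flow from a separate process using the Jina Client. A minimal sketch, assuming the Flow serves gRPC on port 51000 as in the YAML configuration above:

```python
from docarray import DocumentArray, Document
from jina import Client

# Connect to the Flow served over gRPC on port 51000.
client = Client(port=51000, protocol='grpc')

returned_docs = client.post(
    on='/encode',
    inputs=DocumentArray([Document(text='some text to encode')]),
)
print(returned_docs[0].embedding.shape)
```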


To see what other options you can specify when initializing the executor, please go to the FinetunerExecutor page and click on Arguments on the top-right side.

**FinetunerExecutor parameters:** the only required argument is `artifact`; we provide default values for all the others.

## Special case: Artifacts with CLIP models

If your fine-tuning job was executed on a CLIP model, your artifact contains two models: `clip-vision` and `clip-text`. The vision model allows you to embed images, and the text model encodes text passages into the same vector space. To use these models, you have to provide the model name via an additional `select_model` parameter to the `get_model` function.

To encode text, select the `clip-text` model:

```python
from docarray import DocumentArray, Document
import finetuner

token = finetuner.get_token()
run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN'
)

model = finetuner.get_model(run.artifact_id, token=token, select_model='clip-text')

da = DocumentArray([Document(text='some text to encode')])

finetuner.encode(model=model, data=da)
```

To encode images, select the `clip-vision` model:

```python
from docarray import DocumentArray, Document
import finetuner

token = finetuner.get_token()
run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN'
)

model = finetuner.get_model(run.artifact_id, token=token, select_model='clip-vision')

da = DocumentArray([Document(uri='~/Pictures/my_img.png')])

finetuner.encode(model=model, data=da)
```
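
Because both models map into the same vector space, you can compare text and image embeddings directly. A minimal sketch using cosine similarity (the `text_da` and `image_da` names below are illustrative, referring to DocumentArrays encoded with the two models as shown above):

```python
import numpy as np

# `text_da` was encoded with the clip-text model,
# `image_da` with the clip-vision model, as shown above.
text_emb = text_da[0].embedding
image_emb = image_da[0].embedding

# Cosine similarity between the text and image embeddings.
similarity = np.dot(text_emb, image_emb) / (
    np.linalg.norm(text_emb) * np.linalg.norm(image_emb)
)
print(f'Text-image similarity: {similarity:.4f}')
```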


If you want to host the CLIP models, you also have to provide the model name via the `select_model` parameter inside the `uses_with` attribute:

```python
import finetuner
from jina import Flow

token = finetuner.get_token()
run = finetuner.get_run(
    experiment_name='YOUR-EXPERIMENT',
    run_name='YOUR-RUN'
)

f = Flow().add(
    uses='jinahub+docker://FinetunerExecutor/v0.10.2',
    uses_with={'artifact': run.artifact_id, 'token': token, 'select_model': 'clip-text'},
)
```