History log of /plugin/aichat/Model/Ollama/EmbeddingModel.php (Results 1 – 6 of 6)
Revision Date Author Comments
# 7be8078e 15-Apr-2025 Andreas Gohr <andi@splitbrain.org>

allow models to have a zero token limit

This allows for configuring completely unknown models. For these models
no token limit is known and we will simply not apply any. Instead we
trust that the model will either be large enough to handle our input or
at least throw useful error messages.



# cc7172ce 25-Feb-2025 Andreas Gohr <andi@splitbrain.org>

ollama: fix embed endpoint and error in model file

We expect /api to be part of the configured base URL so it should not
be part of the call itself.


# 94b5d70e 14-Feb-2025 Max Theisen <96209029+MaxThFe@users.noreply.github.com>

Update EmbeddingModel.php to remove the :latest suffix


# d2192bba 13-Feb-2025 Max Theisen <96209029+MaxThFe@users.noreply.github.com>

Update EmbeddingModel.php for new Ollama API


# 4dd0657e 06-Feb-2025 Andreas Gohr <andi@splitbrain.org>

allow to set arbitrary models

We now initialize a model configuration even if we have no info in
model.json using some default values for the token limits.

Models can implement the loadUnkonwModelInfo() method to fetch the info
from an API if such a thing exists. This is currently implemented for
gemini and ollama.



# 074b7701 22-Apr-2024 Andreas Gohr <andi@splitbrain.org>

added support for groq