<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/rss.xsl.xml"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
    <title>Changes in ChatModel.php</title>
    <description></description>
    <language>en</language>
    <copyright>Copyright 2025</copyright>
    <generator>Java</generator>
<item>
        <title>7be8078ef9026e317a5c01f90a94183276bbbbd2 - allow models to have a zero token limit</title>
        <link>http://127.0.0.1:8080/history/plugin/aichat/Model/Ollama/ChatModel.php#7be8078ef9026e317a5c01f90a94183276bbbbd2</link>
        <description>allow models to have a zero token limit

            This allows for configuring completely unknown models. For these models no token limit is known and we simply do not apply any. Instead we trust that the model will either be large enough to handle our input or at least throw useful error messages.

            List of files:
            /plugin/aichat/Model/Ollama/ChatModel.php</description>
        <pubDate>Tue, 15 Apr 2025 13:15:03 +0000</pubDate>
        <dc:creator>Andreas Gohr &lt;andi@splitbrain.org&gt;</dc:creator>
    </item>
<item>
        <title>d481c63ccb6ead31f97b7b0119104ee25a34de28 - ollama: remove thinking part from deepseek answers</title>
        <link>http://127.0.0.1:8080/history/plugin/aichat/Model/Ollama/ChatModel.php#d481c63ccb6ead31f97b7b0119104ee25a34de28</link>
        <description>ollama: remove thinking part from deepseek answers

            List of files:
            /plugin/aichat/Model/Ollama/ChatModel.php</description>
        <pubDate>Tue, 25 Feb 2025 11:43:49 +0000</pubDate>
        <dc:creator>Andreas Gohr &lt;andi@splitbrain.org&gt;</dc:creator>
    </item>
<item>
        <title>4dd0657e5ac7e828b159f619391769caa2e5332f - allow to set arbitrary models</title>
        <link>http://127.0.0.1:8080/history/plugin/aichat/Model/Ollama/ChatModel.php#4dd0657e5ac7e828b159f619391769caa2e5332f</link>
        <description>allow to set arbitrary models

            We now initialize a model configuration even if we have no info in model.json, using some default values for the token limits. Models can implement the loadUnkonwModelInfo() method to fetch the info from an API, if such a thing exists. Currently implemented for gemini and ollama.

            List of files:
            /plugin/aichat/Model/Ollama/ChatModel.php</description>
        <pubDate>Thu, 06 Feb 2025 16:34:23 +0000</pubDate>
        <dc:creator>Andreas Gohr &lt;andi@splitbrain.org&gt;</dc:creator>
    </item>
<item>
        <title>074b7701298bfc2f8fb181bdb361a02be5a51191 - added support for groq</title>
        <link>http://127.0.0.1:8080/history/plugin/aichat/Model/Ollama/ChatModel.php#074b7701298bfc2f8fb181bdb361a02be5a51191</link>
        <description>added support for groq

            List of files:
            /plugin/aichat/Model/Ollama/ChatModel.php</description>
        <pubDate>Mon, 22 Apr 2024 16:26:05 +0000</pubDate>
        <dc:creator>Andreas Gohr &lt;andi@splitbrain.org&gt;</dc:creator>
    </item>
</channel>
</rss>
