====== DokuLLM Plugin - LLM Configuration and Workflow ======

This document explains how to configure Large Language Model (LLM) interactions and describes the workflow of the DokuLLM plugin.

===== LLM Configuration =====

The DokuLLM plugin supports various LLM providers through API integration. Configuration is done through DokuWiki's configuration manager.

==== Supported LLM Providers ====

The plugin has been tested with:
  * Ollama (local/self-hosted)
  * OpenAI API
  * Anthropic Claude API
  * Other OpenAI-compatible APIs

==== Configuration Settings ====

=== Core LLM Settings ===

**LLM API URL**
  * The base URL for your LLM API
  * Examples:
    * Ollama: ''http://localhost:11434/v1''
    * OpenAI: ''https://api.openai.com/v1''
    * Anthropic Claude: ''https://api.anthropic.com/v1''

**LLM API Key**
  * Authentication key for your LLM provider
  * Leave empty if using local/self-hosted models (like Ollama)

**Default LLM Model**
  * The model to use for text processing
  * Examples:
    * Ollama: ''llama3'', ''mistral'', ''phi3''
    * OpenAI: ''gpt-3.5-turbo'', ''gpt-4''
    * Anthropic: ''claude-3-haiku-20240307'', ''claude-3-sonnet-20240229''
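
A quick way to check that the URL, key, and model name fit together is to list the models the endpoint exposes. The following is a minimal standalone Python sketch (not part of the plugin) against an OpenAI-compatible endpoint, using the example values above:

<code python>
import requests

API_URL = "http://localhost:11434/v1"  # e.g. a local Ollama instance
API_KEY = ""                           # leave empty for local models

# OpenAI-compatible APIs list their available models at GET /models
headers = {"Authorization": f"Bearer {API_KEY}"} if API_KEY else {}
response = requests.get(f"{API_URL}/models", headers=headers, timeout=10)
response.raise_for_status()

for model in response.json()["data"]:
    print(model["id"])  # any ID printed here is a valid value for the model setting
</code>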

=== Advanced LLM Settings ===

**Temperature**
  * Controls randomness in responses (0.0 = nearly deterministic, higher values = more varied and creative)
  * Recommended values: 0.3-0.7 for most use cases

**Max Tokens**
  * Maximum number of tokens in the LLM response
  * Adjust based on your needs and the model's capabilities

**System Prompt**
  * Base instructions that guide the LLM's behavior
  * Can be customized for your specific use case
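
These settings map directly onto fields of the request body in the OpenAI-compatible chat format. A sketch of how a payload might look (the values are illustrative, and the system prompt text here is an assumption, not the plugin's default):

<code python>
payload = {
    "model": "llama3",       # the configured default model
    "temperature": 0.5,      # mid-range value from the recommendation above
    "max_tokens": 1024,      # upper bound on the length of the response
    "messages": [
        # the configured system prompt steers the model's overall behavior
        {"role": "system", "content": "You are a helpful DokuWiki writing assistant."},
        # the user message carries the actual task
        {"role": "user", "content": "Summarize the following text: ..."},
    ],
}
</code>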

=== ChromaDB Integration ===

**Enable ChromaDB**
  * Enable or disable ChromaDB integration for document retrieval

**ChromaDB Settings**
  * Host, port, tenant, and database configuration for the ChromaDB server
  * Required for advanced features like document search and context-aware responses

**Ollama Embeddings**
  * Host and port of the Ollama server used for generating text embeddings
  * Required when using ChromaDB for document storage
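
To illustrate how the two services cooperate, here is a rough standalone sketch: Ollama turns the query text into a vector, and ChromaDB returns the nearest stored documents. The embedding model name (''nomic-embed-text'') and collection name (''wiki'') are assumptions for the example, not plugin defaults:

<code python>
import requests
import chromadb

# 1. Ask the Ollama server for an embedding of the query text
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "backup strategy for DokuWiki"},
    timeout=30,
)
embedding = resp.json()["embedding"]

# 2. Query ChromaDB for the most similar stored documents
client = chromadb.HttpClient(
    host="localhost", port=8000,
    tenant="default_tenant", database="default_database",  # ChromaDB defaults
)
collection = client.get_or_create_collection("wiki")  # hypothetical collection name
results = collection.query(query_embeddings=[embedding], n_results=3)

for doc_id, distance in zip(results["ids"][0], results["distances"][0]):
    print(doc_id, distance)
</code>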

===== LLM Workflow =====

The DokuLLM plugin follows a specific workflow to process requests:

==== 1. User Interaction ====

  * User selects an action from the LLM toolbar in the editor
  * User can select text or work with the entire document
  * User may provide custom prompts for specific tasks

==== 2. Context Preparation ====

  * The plugin gathers relevant context:
    * Current page content
    * Selected text (if any)
    * Page metadata
    * Template information (if applicable)
    * Related documents from ChromaDB (if enabled)
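
In code form, this step amounts to collecting whatever pieces are available into a single context block. A simplified sketch of the idea (the function and field names are illustrative, not the plugin's internal API):

<code python>
def prepare_context(page_text, selection=None, metadata=None, related_docs=()):
    """Collect the available context pieces into one string for the prompt."""
    parts = [f"Current page:\n{page_text}"]
    if selection:
        parts.append(f"Selected text:\n{selection}")
    if metadata:
        parts.append(f"Page metadata:\n{metadata}")
    for doc in related_docs:  # e.g. documents retrieved from ChromaDB
        parts.append(f"Related document:\n{doc}")
    return "\n\n".join(parts)
</code>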

==== 3. Prompt Construction ====

  * System prompt is loaded based on the selected action
  * User content is formatted according to the action requirements
  * Context information is included to provide background
  * Metadata and instructions are added to guide the LLM
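
Put together, the constructed prompt is a list of chat messages: the action's system prompt first, then the context and the user's content. A rough sketch of the idea, not the plugin's actual code:

<code python>
def build_messages(system_prompt, context, user_content):
    """Assemble the chat messages for one editor action."""
    return [
        {"role": "system", "content": system_prompt},
        # context is passed along as background the model can draw on
        {"role": "user", "content": f"Background:\n{context}\n\nTask:\n{user_content}"},
    ]
</code>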

==== 4. LLM API Call ====

  * Request is sent to the configured LLM API
  * Parameters (model, temperature, max tokens) are applied
  * Authentication headers are included if an API key is configured
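
For an OpenAI-compatible endpoint, this step boils down to a single POST request. A minimal standalone Python sketch of the equivalent request (the plugin itself implements this in PHP):

<code python>
import requests

def call_llm(api_url, api_key, model, messages, temperature=0.5, max_tokens=1024):
    """Send a chat-completion request and return the raw JSON response."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # authentication header only when a key is configured
        headers["Authorization"] = f"Bearer {api_key}"
    response = requests.post(
        f"{api_url}/chat/completions",
        headers=headers,
        json={
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()
</code>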

==== 5. Response Processing ====

  * LLM response is received and parsed
  * Tool calls are detected and processed if applicable
  * Response is formatted for display in the editor
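
In the OpenAI-compatible response format, the generated text and any tool calls live on the first choice's message. A sketch of the parsing logic, with tool handling reduced to a stub:

<code python>
def process_response(data):
    """Extract text or tool-call names from a chat-completion response."""
    message = data["choices"][0]["message"]
    if message.get("tool_calls"):
        # the plugin would dispatch each requested tool here
        return [call["function"]["name"] for call in message["tool_calls"]]
    return message["content"]  # plain text, ready for formatting
</code>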

==== 6. Result Integration ====

  * Results are inserted into the editor at the appropriate location
  * Metadata may be added to track LLM processing
  * User can review and edit the LLM-generated content

===== Available Actions =====

The plugin provides several predefined actions:

==== Content Generation ====
  * **Write**: Generate new content based on a prompt
  * **Continue**: Extend existing content
  * **Rewrite**: Rephrase selected text
  * **Summarize**: Create a summary of the content

==== Content Analysis ====
  * **Analyze**: Provide insights about the content
  * **Check**: Review for grammar, style, and clarity
  * **Explain**: Provide explanations for complex topics

==== Structure and Organization ====
  * **Outline**: Create an outline from content
  * **Structure**: Organize content into sections
  * **Template**: Apply a template structure

==== Research and Retrieval ====
  * **Query**: Search for related information
  * **Cite**: Add citations to content
  * **Expand**: Add more details to topics

===== Custom Prompts =====

Users can create custom prompts for specific tasks:

  * Use the "Custom Prompt" option in the toolbar
  * Enter specific instructions for the LLM
  * Save frequently used prompts for reuse
  * Combine with selected text for targeted processing

===== Metadata Handling =====

The plugin uses metadata to track LLM processing:

  * Metadata is stored in the format ''~~LLM_ACTION:timestamp~~''
  * Used to prevent duplicate processing
  * Can be used to track content evolution
  * Metadata can be configured to be visible or hidden
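
The marker format is simple enough to read and write with a regular expression. A sketch, assuming a numeric (Unix epoch) timestamp, since the document does not specify its exact form:

<code python>
import re
import time

MARKER = re.compile(r"~~LLM_(?P<action>\w+):(?P<timestamp>\d+)~~")

def add_marker(text, action):
    """Append a processing marker like ~~LLM_SUMMARIZE:1700000000~~."""
    return f"{text}\n~~LLM_{action.upper()}:{int(time.time())}~~"

def find_markers(text):
    """Return (action, timestamp) pairs already present in the page."""
    return [(m["action"], int(m["timestamp"])) for m in MARKER.finditer(text)]
</code>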

===== Best Practices =====

==== For Better Results ====
  * Provide clear, specific prompts
  * Select relevant text for processing
  * Use appropriate temperature settings for your task
  * Review and edit LLM-generated content before publishing

==== For Performance ====
  * Configure appropriate token limits
  * Use efficient prompts to reduce processing time
  * Enable caching where appropriate
  * Monitor API usage and costs

==== For Security ====
  * Store API keys securely
  * Review LLM-generated content before publishing
  * Limit access to LLM features as needed
  * Regularly update the plugin for security fixes

===== Troubleshooting LLM Issues =====

Common issues and solutions:

  * **Poor quality responses**: Check prompt clarity and model selection
  * **API errors**: Verify API key and endpoint configuration
  * **Timeouts**: Reduce max tokens or use a faster model
  * **Context limits**: Break large documents into smaller sections (see the sketch after this list)
  * **Repetitive responses**: Increase the temperature setting
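
For the context-limit case, a simple paragraph-based splitter is often enough. A rough sketch (the size limit is an arbitrary example; real limits are measured in model tokens, not characters):

<code python>
def split_into_sections(text, max_chars=4000):
    """Greedily pack paragraphs into chunks below a size limit."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) + 2 > max_chars:
            chunks.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        chunks.append(current)
    return chunks
</code>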

For persistent issues, check the plugin's error logs and consult the LLM provider's documentation.