====== DokuLLM Plugin - Prompt System ======

This document explains how the DokuLLM plugin's prompt system works, including the prompt hierarchy, available placeholders, and how to customize prompts.

===== Prompt Hierarchy =====

The DokuLLM plugin uses a hierarchical prompt system that allows for flexible customization:

==== 1. Default Prompts ====

  * Located in ''lib/plugins/dokullm/doc/default/''
  * These are the base prompts that ship with the plugin
  * Used when no custom prompts are defined
  * Named according to their function (e.g., ''write.txt'', ''analyze.txt'')

==== 2. Profile Prompts ====

  * Located in ''lib/plugins/dokullm/doc/profiles/''
  * Allow for different prompt configurations for different use cases
  * Can override default prompts selectively
  * Activated through the plugin configuration

==== 3. Custom Prompts ====

  * Can be placed in ''lib/plugins/dokullm/doc/custom/''
  * Completely override default and profile prompts
  * Allow for organization-specific customizations
  * Take precedence over all other prompt types

===== Prompt Structure =====

Each prompt file follows a specific structure:

==== System Section ====

  * Defines the role and behavior of the LLM
  * Provides context about the task
  * Sets expectations for the response format

==== User Section ====

  * Contains the actual instruction
  * Incorporates user content and context
  * Uses placeholders for dynamic content

==== Output Format ====

  * Specifies how the response should be structured
  * May include examples of expected output
  * Helps ensure consistent results

===== Available Placeholders =====

The plugin provides several placeholders that can be used in prompts:

==== Content Placeholders ====

**{{content}}**
  * The main content being processed
  * Can be the entire document or the selected text
  * Automatically populated by the plugin

**{{selected_text}}**
  * Specifically the text selected by the user
  * Empty if no text was selected
  * Useful for targeted operations

==== Context Placeholders ====

**{{page_title}}**
  * The title of the current wiki page
  * Helps provide context to the LLM

**{{page_id}}**
  * The internal ID of the current page
  * Useful for referencing or tracking

**{{namespace}}**
  * The namespace/category of the current page
  * Can be used for domain-specific processing

==== Metadata Placeholders ====

**{{template}}**
  * Content of the template associated with this page
  * Used when applying template-based structures

**{{examples}}**
  * Example content related to the current task
  * Helps guide the LLM with specific examples

**{{previous_content}}**
  * Content from a previous version or related page
  * Useful for continuation or comparison tasks

==== Configuration Placeholders ====

**{{model}}**
  * The LLM model being used
  * Can be referenced in model-specific instructions

**{{language}}**
  * The language setting of the wiki
  * Useful for language-specific processing

**{{date}}**
  * The current date in ISO format
  * Useful for time-sensitive operations

===== Creating Custom Prompts =====

==== Basic Process ====

  - **Identify the Action**
    * Determine which action you want to customize
    * Check the existing prompt names in ''doc/default/''
  - **Create the Prompt File**
    * Create a new file with the same name in your custom directory
    * Use the same structure as existing prompts
  - **Add Placeholders**
    * Incorporate relevant placeholders for dynamic content
    * Ensure all required placeholders are included
  - **Test and Refine**
    * Test the prompt with various content types
    * Refine based on the quality of the results

==== Example Prompt Structure ====

A typical prompt file might look like this:

```
You are an expert content assistant helping with {{page_title}}.

Your task is to {{action_description}}.

Use the following content as your source:

{{content}}

When responding:
1. Be concise but thorough
2. Use clear, professional language
3. Format your response in DokuWiki syntax
4. Focus specifically on the selected text if provided

Provide your response below:
```

===== Prompt Variables =====

The plugin supports dynamic variables in prompts:

==== Action-Specific Variables ====

  * **{{action}}** - The current action being performed
  * **{{action_description}}** - Description of the action
  * **{{expected_output}}** - Description of the expected output format

==== User Variables ====

  * **{{user_prompt}}** - Custom instructions provided by the user
  * **{{user_role}}** - Role of the user (if configured)
  * **{{user_preferences}}** - User-specific preferences

===== Prompt Best Practices =====

==== Writing Effective Prompts ====

  * **Be Specific**: Clearly define the task and expected output
  * **Provide Context**: Include relevant background information
  * **Use Examples**: Show examples of the desired output format
  * **Set Constraints**: Define limitations on length, style, or content
  * **Guide Formatting**: Specify output format requirements

==== Placeholder Usage ====

  * **Always Include Required Placeholders**: Missing placeholders can cause errors
  * **Use Contextual Placeholders**: Only include placeholders that add value
  * **Test Placeholder Substitution**: Verify that placeholders are correctly replaced
  * **Document Custom Placeholders**: If adding new placeholders, document their purpose

==== Organization and Maintenance ====

  * **Consistent Naming**: Use consistent naming conventions for prompt files
  * **Version Control**: Keep prompts under version control
  * **Documentation**: Document significant changes to prompts
  * **Backup**: Keep backups of working prompt configurations

===== Troubleshooting Prompts =====

Common issues and solutions:

  * **Empty Responses**: Check that all required placeholders are provided
  * **Irrelevant Content**: Improve the context and constraints in the prompt
  * **Formatting Issues**: Be more specific about output format requirements
  * **Performance Problems**: Simplify prompts or reduce context length
  * **Inconsistent Results**: Add more specific guidance and examples

For debugging prompt issues, enable debug logging in the plugin configuration to see the actual prompts being sent to the LLM.
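To make the placeholder mechanism described above concrete, here is a minimal sketch of how ''{{...}}'' substitution could work. This is an illustrative Python model, not the plugin's actual implementation (DokuWiki plugins are written in PHP); the function name ''substitute_placeholders'' and the choice to replace unknown placeholders with an empty string are assumptions.

```python
import re


def substitute_placeholders(template: str, values: dict) -> str:
    """Replace every {{name}} token in the template with its value.

    Unknown placeholders become empty strings in this sketch; the real
    plugin might instead raise an error or leave them untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda match: values.get(match.group(1), ""),
        template,
    )


# Build a prompt from a template and the current page context.
prompt = substitute_placeholders(
    "You are helping with {{page_title}}.\n\nSource content:\n{{content}}",
    {"page_title": "Installation Guide", "content": "Step one: ..."},
)
```

Note that this is also why missing required placeholders lead to empty or irrelevant responses: a template that expects ''{{content}}'' but never receives it simply sends the LLM a prompt with a hole in it.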
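The custom > profile > default precedence from the //Prompt Hierarchy// section can be sketched as a simple ordered lookup. Again this is an illustrative Python model under assumed paths: ''resolve_prompt'' is a hypothetical helper, and the real plugin's PHP resolution logic may differ in detail.

```python
from pathlib import Path
from typing import Optional


def resolve_prompt(action: str, doc_root: Path,
                   profile: Optional[str] = None) -> Optional[Path]:
    """Return the highest-precedence prompt file for an action.

    The search order mirrors the documented hierarchy:
    custom/ overrides profiles/<profile>/ overrides default/.
    """
    search_dirs = ["custom"]
    if profile:
        search_dirs.append(f"profiles/{profile}")
    search_dirs.append("default")
    for subdir in search_dirs:
        candidate = doc_root / subdir / f"{action}.txt"
        if candidate.is_file():
            return candidate
    return None
```

Because ''custom/'' is checked first, dropping a ''write.txt'' into the custom directory shadows both the profile and default versions without touching them, which is what makes the override non-destructive and easy to roll back.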