====== DokuLLM Plugin - Prompt System and Namespace Documentation ======

This document explains how the DokuLLM plugin's prompt system works, including the namespace structure, prompt hierarchy, available placeholders, system prompt extensions, and how to customize prompts.

===== Namespace Structure =====

The ''dokullm:'' namespace in DokuWiki serves as the central configuration and prompt management system for the DokuLLM plugin. This namespace contains all the prompt templates, profiles, and system configurations that control how the LLM processes content.

==== System Prompts ====

=== Main System Prompt (''dokullm:profiles:default:system'') ===

The system prompt is the foundational instruction set that defines the LLM's role, behavior, and capabilities. It establishes:

  * Role Definition: The LLM acts as a helpful assistant specialized in text processing tasks
  * Content Guidelines:
    * Use clear and professional language
    * Maintain proper DokuWiki formatting
    * Follow structured content organization
  * Capabilities:
    * Access to templates, examples, and previous content
    * Ability to retrieve relevant information
    * Context-aware processing
  * Safety Constraints:
    * Focus on factual content
    * No personal opinions
    * Maintain a professional tone

=== Command-Specific System Appendages ===

Each action can have an additional system prompt that extends the main system prompt:

  * ''create:system'' - Additional instructions for content creation
  * ''rewrite:system'' - Guidelines for text rewriting and improvement
  * ''summarize:system'' - Focus areas for summarization
  * ''analyze:system'' - Analysis framework and structure
  * ''expand:system'' - Guidelines for content expansion

===== Prompt Placeholders =====

The prompts use several special placeholders that are automatically populated.

==== Core Placeholders ====

  * ''{text}'' - The main content being processed
  * ''{template}'' - Template content for structure guidance
  * ''{examples}'' - Style examples from previous content
  * ''{snippets}'' - Relevant text snippets from similar content
  * ''{previous}'' - Content from previous related documents
  * ''{prompt}'' - Custom user prompt

==== Metadata Placeholders ====

  * ''{current_date}'' - Current date
  * ''{previous_date}'' - Date of the previous document (if applicable)
  * ''{action}'' - Current action being performed
  * ''{think}'' - Thinking mode switch (''/think'' or ''/no_think'')
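To make the substitution step concrete, here is a minimal Python sketch of how such placeholders could be filled in before a prompt is sent to the LLM. It is a conceptual illustration only, not the plugin's actual code; the function name ''fill_placeholders'' and the example values are hypothetical.

<code python>
from datetime import date

def fill_placeholders(prompt: str, values: dict) -> str:
    """Replace {name} tokens in a prompt page with concrete values.

    Placeholders that are not supplied are left in place rather than
    raising an error (illustrative behavior only, not the plugin's).
    """
    for name, value in values.items():
        prompt = prompt.replace("{" + name + "}", value)
    return prompt

# Example: filling a summarize prompt with content and metadata
summarize_prompt = "Summarize the following text in a concise manner: {text} {think}"
filled = fill_placeholders(summarize_prompt, {
    "text": "Quarterly revenue grew by 12% compared to the previous year...",
    "action": "summarize",
    "current_date": date.today().isoformat(),
    "think": "/no_think",  # thinking mode disabled
})
</code>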
===== Template Integration =====

==== Template Metadata ====

Templates are referenced using metadata directives:

  * ''~~LLM_TEMPLATE:template_id~~''
  * ''~~LLM_EXAMPLES:example1,example2~~''
  * ''~~LLM_PREVIOUS:previous_page_id~~''

==== Automatic Template Discovery ====

When no template is specified, the system can automatically find relevant templates based on content similarity using ChromaDB vector search.

===== Context Management =====

The system provides rich context through:

  - Template Context: Structural guidance from template documents
  - Style Context: Examples showing the preferred writing style
  - Content Context: Relevant snippets from similar documents
  - Historical Context: Previous related documents for continuity

===== Thinking Process =====

When enabled, the LLM can engage in a deeper reasoning process, with the thinking output optionally displayed to users. This is controlled by the ''{think}'' placeholder, which resolves to ''/think'' or ''/no_think''.

===== Prompt Hierarchy =====

The DokuLLM plugin uses a hierarchical prompt system that allows for flexible customization using DokuWiki pages:

==== 1. Default Prompts ====

  * Located in the ''dokullm:profiles:default:'' namespace
  * These are the base prompts that ship with the plugin
  * Used when no custom prompts are defined
  * Named according to their function (e.g., ''summarize'', ''expand'')

==== 2. Profile Prompts ====

  * Located in the ''dokullm:profiles:[PROFILE_NAME]:'' namespace
  * Allow for different prompt configurations for different use cases
  * Can override default prompts selectively
  * Activated through the plugin configuration

==== 3. Custom Prompts ====

  * Can be created in custom namespaces under ''dokullm:profiles:''
  * Completely override default and profile prompts
  * Allow for organization-specific customizations
  * Take precedence over all other prompt types

===== Prompt Structure =====

Each prompt page follows a simple structure with placeholders:

==== Simple Prompt Format ====

  * Contains direct instructions to the LLM
  * Uses placeholders for dynamic content insertion
  * Does not require complex sectioning

===== Available Placeholders =====

The plugin provides several placeholders that can be used in prompts:

==== Content Placeholders ====

**{text}**
  * The main content being processed
  * Could be the entire document or selected text
  * Automatically populated by the plugin

**{template}**
  * Content of the template associated with this page
  * Used when applying template-based structures

**{examples}**
  * Example content related to the current task
  * Helps guide the LLM with specific examples

**{snippets}**
  * Relevant text snippets from similar documents
  * Provides context from related content

**{previous}**
  * Content from a previous version or related page
  * Useful for continuation or comparison tasks

**{prompt}**
  * Custom instructions provided by the user
  * Allows for dynamic user input

==== Context Placeholders ====

**{current_date}**
  * Current date in ISO format
  * Useful for time-sensitive operations

**{previous_date}**
  * Date of the previous related document
  * Useful for tracking changes over time

**{action}**
  * The current action being performed
  * Can be used for action-specific instructions

**{think}**
  * Thinking mode indicator (''/think'' or ''/no_think'')
  * Controls whether the reasoning process is shown

===== Creating Custom Prompts =====

==== Basic Process ====

  - **Identify the Action**
    * Determine which action you want to customize
    * Check the existing prompt names in the ''dokullm:profiles:default:'' namespace
  - **Create a Prompt Page**
    * Create a new page with the same name in your profile namespace
    * Use the same structure as the existing prompts
  - **Add Placeholders**
    * Incorporate relevant placeholders for dynamic content
    * Ensure all required placeholders are included
  - **Test and Refine**
    * Test the prompt with various content types
    * Refine based on the quality of the results

==== Example Prompt Structure ====

A typical prompt page might look like this:

<code>
Summarize the following text in a concise manner:

{text}
</code>
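Under the prompt hierarchy described above, such a custom page only needs to exist in your profile namespace to take effect. The following Python sketch illustrates that lookup order, trying the active profile first and falling back to the shipped defaults. It is a conceptual sketch, not the plugin's implementation; ''resolve_prompt_page'' and the ''page_exists'' callable are hypothetical names, not part of the plugin's API.

<code python>
def resolve_prompt_page(action: str, profile: str, page_exists) -> str:
    """Return the DokuWiki page ID of the prompt used for an action.

    The active profile is checked first; if it does not define the
    prompt, the shipped defaults are used. `page_exists` is a stand-in
    for a DokuWiki page lookup (hypothetical hook).
    """
    candidates = [
        f"dokullm:profiles:{profile}:{action}",  # profile/custom prompt
        f"dokullm:profiles:default:{action}",    # default prompt
    ]
    for page_id in candidates:
        if page_exists(page_id):
            return page_id
    raise LookupError(f"no prompt page found for action '{action}'")

# A 'medical' profile that only overrides 'analyze' still falls back
# to the default 'summarize' prompt:
existing = {
    "dokullm:profiles:medical:analyze",
    "dokullm:profiles:default:analyze",
    "dokullm:profiles:default:summarize",
}
print(resolve_prompt_page("analyze", "medical", existing.__contains__))
# dokullm:profiles:medical:analyze
print(resolve_prompt_page("summarize", "medical", existing.__contains__))
# dokullm:profiles:default:summarize
</code>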
===== System Prompt Extensions =====

The plugin supports a hierarchical system prompt architecture that allows fine-grained control over LLM behavior for specific actions. This extension mechanism enables you to provide additional context and instructions that are specific to individual operations.

==== Extension Mechanism ====

The system automatically checks for a command-specific system prompt extension when processing any action. The extension follows this page naming convention: ''dokullm:profiles:[PROFILE_NAME]:[ACTION_NAME]:system''

For example:

  * ''dokullm:profiles:default:summarize:system''
  * ''dokullm:profiles:default:create:system''
  * ''dokullm:profiles:medical:analyze:system''

==== How It Works ====

  - **Base System Prompt Loading**: When any action is initiated, the main system prompt is first loaded from ''dokullm:profiles:[PROFILE_NAME]:system''.
  - **Command-Specific Extension Check**: The system then checks whether a command-specific system prompt extension exists at ''dokullm:profiles:[PROFILE_NAME]:[ACTION_NAME]:system''.
  - **Automatic Appendage**: If the extension exists, its content is automatically appended to the base system prompt with a newline separator.
  - **Fallback Behavior**: If the extension does not exist, the system simply uses the base system prompt without any additional instructions.
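The four steps above can be summarized in a short Python sketch. This is a conceptual illustration, not the plugin's actual code; ''build_system_prompt'' and ''read_page'' are hypothetical names standing in for the plugin's internal page handling.

<code python>
def build_system_prompt(profile: str, action: str, read_page) -> str:
    """Compose the system prompt sent with an action.

    1. Load the base system prompt of the active profile.
    2. Check for an action-specific extension page.
    3. If it exists, append it with a newline separator.
    4. Otherwise, use the base system prompt unchanged.
    `read_page` is a stand-in that returns a page's text,
    or None if the page does not exist (hypothetical helper).
    """
    base = read_page(f"dokullm:profiles:{profile}:system") or ""
    extension = read_page(f"dokullm:profiles:{profile}:{action}:system")
    if extension:
        return base + "\n" + extension
    return base
</code>

For instance, combining ''dokullm:profiles:default:system'' with ''dokullm:profiles:default:summarize:system'' in this way yields the combined result shown in the use cases below.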
==== Use Cases ====

=== Action-Specific Guidelines ===

Command extensions are ideal for providing action-specific guidance that supplements the general system instructions. For example:

**Base System Prompt** (''dokullm:profiles:default:system''):

<code>
You are a helpful assistant specialized in text processing tasks...
</code>

**Summarize Extension** (''dokullm:profiles:default:summarize:system''):

<code>
When summarizing content, focus on the key findings and conclusions.
Create a concise executive summary that captures the most important information.
Limit your summary to 3-5 bullet points.
</code>

**Combined Result**:

<code>
You are a helpful assistant specialized in text processing tasks...
[base instructions...]
When summarizing content, focus on the key findings and conclusions.
Create a concise executive summary that captures the most important information.
Limit your summary to 3-5 bullet points.
</code>

=== Specialized Processing Instructions ===

Extensions can provide detailed processing instructions that are unique to each action:

  * Analysis Extension: Specify analytical frameworks and evaluation criteria
  * Comparison Extension: Define comparison methodologies and focus areas
  * Rewrite Extension: Provide style guidelines and content restructuring rules
  * Create Extension: Outline content generation templates and required sections

==== Best Practices ====

  - **Complementary Content**: Extensions should complement, not contradict, the base system prompt
  - **Concise Instructions**: Keep extensions focused and specific to the action
  - **Consistent Formatting**: Maintain a style consistent with the base system prompt
  - **Profile Awareness**: Create profile-specific extensions when using multiple profiles
  - **Testing**: Test extensions thoroughly to ensure they produce the desired behavior

==== Example Structure ====

<code>
dokullm:profiles:default:
├── system              # Base system prompt
├── summarize           # Summarize action prompt
├── summarize:system    # Summarize-specific system extension
├── expand              # Expand action prompt
├── expand:system       # Expand-specific system extension
├── grammar             # Grammar action prompt
└── grammar:system      # Grammar-specific system extension
</code>

This extension mechanism provides powerful flexibility in customizing LLM behavior for different actions while maintaining a consistent foundational set of instructions.

===== Prompt Best Practices =====

==== Writing Effective Prompts ====

  * **Be Specific**: Clearly define the task and the expected output
  * **Provide Context**: Include relevant background information
  * **Use Examples**: Show examples of the desired output format
  * **Set Constraints**: Define limitations on length, style, or content
  * **Guide Formatting**: Specify output format requirements

==== Placeholder Usage ====

  * **Always Include Required Placeholders**: Missing placeholders can cause errors
  * **Use Contextual Placeholders**: Only include placeholders that add value
  * **Test Placeholder Substitution**: Verify that placeholders are correctly replaced
  * **Document Custom Placeholders**: If adding new placeholders, document their purpose

==== Organization and Maintenance ====

  * **Consistent Naming**: Use consistent naming conventions for prompt pages
  * **Version Control**: Keep prompts under version control
  * **Documentation**: Document significant changes to prompts
  * **Backup**: Keep backups of working prompt configurations

===== Troubleshooting Prompts =====

Common issues and solutions:

  * **Empty Responses**: Check that all required placeholders are provided
  * **Irrelevant Content**: Improve the context and constraints in the prompt
  * **Formatting Issues**: Be more specific about output format requirements
  * **Performance Problems**: Simplify prompts or reduce the context length
  * **Inconsistent Results**: Add more specific guidance and examples

For debugging prompt issues, enable debug logging in the plugin configuration to see the actual prompts being sent to the LLM.

===== Profile System =====

==== Default Profile ====

The ''default'' profile contains standard prompts for general use cases. Each prompt in this profile is designed for a specific task:

  * ''create'': Generate new content from scratch
  * ''rewrite'': Improve the clarity and grammar of existing text
  * ''summarize'': Create concise summaries
  * ''expand'': Add more detail to topics
  * ''analyze'': Provide detailed analysis
  * ''grammar'': Check grammar and spelling

==== Custom Profiles ====

Additional profiles can be created under ''dokullm:profiles:'' for specialized use cases:

  * Different content types
  * Various writing styles
  * Organization-specific requirements
  * Language variations

===== Best Practices =====

  - Prompt Design: Keep prompts clear and specific
  - Template Structure: Use consistent formatting in templates
  - Example Selection: Choose representative examples
  - Metadata Usage: Properly tag pages with relevant metadata
  - Profile Organization: Group related prompts in appropriate profiles

The ''dokullm:'' namespace provides a flexible, extensible framework for managing LLM interactions within DokuWiki, enabling sophisticated AI-assisted content creation while maintaining consistency and quality control.