piedomains.llm package¶
Submodules¶
piedomains.llm.config module¶
LLM configuration for domain classification.
- class piedomains.llm.config.LLMConfig(provider, model, api_key=None, base_url=None, max_tokens=500, temperature=0.1, categories=None, cost_limit_usd=10.0, usage_tracking=True)[source]¶
Bases: object
Configuration for LLM-based classification.
- provider¶
LLM provider (e.g., ‘openai’, ‘anthropic’, ‘google’)
- model¶
Model name (e.g., ‘gpt-4o’, ‘claude-3-5-sonnet-20241022’, ‘gemini-1.5-pro’)
- api_key¶
API key for the provider
- base_url¶
Optional base URL for custom endpoints
- max_tokens¶
Maximum tokens for response
- temperature¶
Temperature for response generation
- categories¶
List of classification categories
- cost_limit_usd¶
Maximum cost limit in USD
- usage_tracking¶
Whether to track API usage
- __init__(provider, model, api_key=None, base_url=None, max_tokens=500, temperature=0.1, categories=None, cost_limit_usd=10.0, usage_tracking=True)¶
piedomains.llm.prompts module¶
Prompt templates for LLM-based domain classification.
- piedomains.llm.prompts.get_classification_prompt(domain, content, categories, max_content_length=8000)[source]¶
Generate classification prompt for text-only analysis.
- piedomains.llm.prompts.get_multimodal_prompt(domain, content=None, categories=None, has_screenshot=False, max_content_length=6000)[source]¶
Generate classification prompt for multimodal analysis (text + image).
- Parameters:
domain, content, categories, has_screenshot, max_content_length (as in the signature above)
- Return type:
str
- Returns:
Formatted prompt string
- piedomains.llm.prompts.get_custom_prompt(domain, content=None, categories=None, custom_instructions=None, has_screenshot=False)[source]¶
Generate a custom classification prompt.
- Parameters:
domain, content, categories, custom_instructions, has_screenshot (as in the signature above)
- Return type:
str
- Returns:
Formatted prompt string
piedomains.llm.response_parser module¶
Response parsing utilities for LLM classification results.
- piedomains.llm.response_parser.parse_llm_response(response_text)[source]¶
Parse LLM response into structured classification result.
- Parameters:
response_text (str) – Raw response text from LLM
- Return type:
dict
- Returns:
Dictionary with parsed classification data
- Raises:
ValueError – If response cannot be parsed
Module contents¶
LLM-based classification utilities for piedomains.
- class piedomains.llm.LLMConfig(provider, model, api_key=None, base_url=None, max_tokens=500, temperature=0.1, categories=None, cost_limit_usd=10.0, usage_tracking=True)[source]¶
Bases: object
Configuration for LLM-based classification.
- provider¶
LLM provider (e.g., ‘openai’, ‘anthropic’, ‘google’)
- model¶
Model name (e.g., ‘gpt-4o’, ‘claude-3-5-sonnet-20241022’, ‘gemini-1.5-pro’)
- api_key¶
API key for the provider
- base_url¶
Optional base URL for custom endpoints
- max_tokens¶
Maximum tokens for response
- temperature¶
Temperature for response generation
- categories¶
List of classification categories
- cost_limit_usd¶
Maximum cost limit in USD
- usage_tracking¶
Whether to track API usage
- __init__(provider, model, api_key=None, base_url=None, max_tokens=500, temperature=0.1, categories=None, cost_limit_usd=10.0, usage_tracking=True)¶
- piedomains.llm.get_classification_prompt(domain, content, categories, max_content_length=8000)[source]¶
Generate classification prompt for text-only analysis.
- piedomains.llm.get_multimodal_prompt(domain, content=None, categories=None, has_screenshot=False, max_content_length=6000)[source]¶
Generate classification prompt for multimodal analysis (text + image).
- Parameters:
domain, content, categories, has_screenshot, max_content_length (as in the signature above)
- Return type:
str
- Returns:
Formatted prompt string
- piedomains.llm.parse_llm_response(response_text)[source]¶
Parse LLM response into structured classification result.
- Parameters:
response_text (str) – Raw response text from LLM
- Return type:
dict
- Returns:
Dictionary with parsed classification data
- Raises:
ValueError – If response cannot be parsed