LLM Tool Calling (ReAct Agent)

PiSovereign includes a ReAct (Reason + Act) agent that enables the LLM to autonomously invoke tools — weather lookups, calendar queries, web searches, and more — instead of relying solely on rigid command parsing.

How It Works

When a user sends a general question (AgentCommand::Ask), the system follows this flow:

  1. Collect tools — The ToolRegistry asks each wired port which tool definitions are available (e.g., if no weather port is configured, get_weather is omitted).
  2. LLM + tools — The conversation history and tool JSON schemas are sent to Ollama’s /api/chat endpoint with the tools parameter.
  3. Parse response — The LLM either returns a final text response or requests one or more tool calls.
  4. Execute tools — If tool calls are returned, the ToolExecutor dispatches each call to the appropriate port, collects results, and appends them as MessageRole::Tool messages to the conversation.
  5. Loop — Steps 2–4 repeat until the LLM produces a final response or a configurable iteration limit / timeout is reached.

```
User → LLM (with tool schemas)
         ├─ Final text → done
         └─ Tool calls → execute → append results → loop back to LLM
```
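The steps above can be sketched as a minimal loop. Everything below — the types, the stub LLM, the string-based message history — is illustrative only, not the actual PiSovereign API:

```rust
// Illustrative sketch of the ReAct loop (steps 2-5). The stub "LLM"
// requests one tool call, then answers once it sees the tool result.

enum LlmResponse {
    FinalText(String),
    ToolCalls(Vec<String>), // simplified: tool names only
}

// Stand-in for the call to Ollama's /api/chat with the tools parameter.
fn call_llm(history: &[String]) -> LlmResponse {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmResponse::FinalText("It is 14 °C and sunny.".to_string())
    } else {
        LlmResponse::ToolCalls(vec!["get_weather".to_string()])
    }
}

// Stand-in for the ToolExecutor dispatching a call to a port.
fn execute_tool(name: &str) -> String {
    format!("tool:{name} -> {{\"temp_c\": 14}}")
}

fn react_loop(user_msg: &str, max_iterations: usize) -> Option<String> {
    let mut history = vec![format!("user:{user_msg}")];
    for _ in 0..max_iterations {
        match call_llm(&history) {
            LlmResponse::FinalText(text) => return Some(text), // done
            LlmResponse::ToolCalls(calls) => {
                for call in calls {
                    // Results are appended as Tool-role messages.
                    history.push(execute_tool(&call));
                }
            }
        }
    }
    None // iteration limit reached without a final answer
}

fn main() {
    println!("{:?}", react_loop("What's the weather?", 5));
}
```

A real implementation would also enforce the per-iteration and total timeouts from the configuration section below.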

Architecture

The implementation follows Clean Architecture:

| Layer | Component | Crate |
|---|---|---|
| Domain | `ToolDefinition` | `domain` |
| Domain | `ToolCall`, `ToolResult`, `ToolCallingResult` | `domain` |
| Domain | `MessageRole::Tool`, `ChatMessage::tool()` | `domain` |
| Application | `ToolRegistryPort` | `application` |
| Application | `ToolExecutorPort` | `application` |
| Application | `InferencePort::generate_with_tools()` | `application` |
| Application | `ReActAgentService` | `application` |
| Infrastructure | `ToolRegistry` | `infrastructure` |
| Infrastructure | `ToolExecutor` | `infrastructure` |
| Infrastructure | `OllamaInferenceAdapter` (extended) | `infrastructure` |
| Presentation | Wired in `main.rs`, used in chat handlers | `presentation_http` |
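The domain types in the table can be pictured roughly as follows. Field and function names here are illustrative, not copied from the actual `domain` crate:

```rust
// Rough shape of the domain-layer tool types and a helper in the
// spirit of ChatMessage::tool(). All names are illustrative.

#[allow(dead_code)]
struct ToolDefinition {
    name: String,
    description: String,
    parameters_schema: String, // JSON Schema for the arguments
}

#[allow(dead_code)]
struct ToolCall {
    name: String,
    arguments: String, // raw JSON arguments from the LLM
}

struct ToolResult {
    call_name: String,
    content: String,
    is_error: bool,
}

/// Wrap a tool result as a (role, content) pair so it can be appended
/// to the conversation with the Tool role.
fn tool_message(result: &ToolResult) -> (String, String) {
    let body = if result.is_error {
        format!("error from {}: {}", result.call_name, result.content)
    } else {
        result.content.clone()
    };
    ("tool".to_string(), body)
}

fn main() {
    let res = ToolResult {
        call_name: "get_weather".to_string(),
        content: r#"{"temp_c": 14}"#.to_string(),
        is_error: false,
    };
    let (role, body) = tool_message(&res);
    println!("{role}: {body}");
}
```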

Available Tools

The following 18 tools are registered when their corresponding ports are wired:

| Tool | Port Required | Description |
|---|---|---|
| `get_weather` | `WeatherPort` | Current weather and forecast |
| `search_web` | `WebSearchPort` | Web search via Brave / DuckDuckGo |
| `list_calendar_events` | `CalendarPort` | List upcoming calendar events |
| `create_calendar_event` | `CalendarPort` | Create a new calendar event |
| `search_contacts` | `ContactPort` | Search contacts by name/email |
| `get_contact` | `ContactPort` | Get full contact details by ID |
| `list_tasks` | `TaskPort` | List tasks/todos with filters |
| `create_task` | `TaskPort` | Create a new task |
| `complete_task` | `TaskPort` | Mark a task as completed |
| `create_reminder` | `ReminderPort` | Schedule a reminder |
| `list_reminders` | `ReminderPort` | List active reminders |
| `search_transit` | `TransitPort` | Search public transit connections |
| `store_memory` | `MemoryStore` | Store a fact in long-term memory |
| `recall_memory` | `MemoryStore` | Recall facts from memory |
| `execute_code` | `CodeExecutionPort` | Run code in a sandboxed container |
| `search_emails` | `EmailPort` | Search emails by query |
| `draft_email` | `EmailPort` + `DraftStorePort` | Draft an email |
| `send_email` | `EmailPort` | Send an email |
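On the wire, each registered tool is sent to Ollama's `/api/chat` as an OpenAI-style function definition in the `tools` parameter. A `get_weather` entry might look roughly like this (a sketch; the actual schemas live in `ToolRegistry`):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Current weather and forecast for a location",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
}
```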

Configuration

Add to config.toml:

```toml
[agent.tool_calling]
# Enable/disable the ReAct agent (default: true)
enabled = true

# Maximum ReAct loop iterations before forcing a final answer
max_iterations = 5

# Timeout per individual tool execution (seconds)
iteration_timeout_secs = 30

# Total timeout for the entire ReAct loop (seconds)
total_timeout_secs = 120

# Run tool calls in parallel when multiple are requested
parallel_tool_execution = true

# Tools that require user approval before execution (future use)
require_approval_for = []
```

When enabled = false, the system falls back to the standard ChatService::chat_with_context flow without any tool calling.
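A struct mirroring this section, with the documented defaults, might look like the following. This is a sketch only; the real definition (and its serde derives) lives in PiSovereign's configuration code:

```rust
// Sketch of a config struct matching [agent.tool_calling].
// Field names mirror the TOML keys; defaults match the comments above.

#[derive(Debug, Clone, PartialEq)]
struct ToolCallingConfig {
    enabled: bool,
    max_iterations: u32,
    iteration_timeout_secs: u64,
    total_timeout_secs: u64,
    parallel_tool_execution: bool,
    require_approval_for: Vec<String>,
}

impl Default for ToolCallingConfig {
    fn default() -> Self {
        Self {
            enabled: true,
            max_iterations: 5,
            iteration_timeout_secs: 30,
            total_timeout_secs: 120,
            parallel_tool_execution: true,
            require_approval_for: Vec::new(),
        }
    }
}

fn main() {
    let cfg = ToolCallingConfig::default();
    println!("ReAct enabled: {}, max_iterations: {}", cfg.enabled, cfg.max_iterations);
}
```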

Relationship to AgentService

The ReAct agent runs alongside the existing AgentService:

  • AgentService handles all structured commands (AgentCommand variants like GetWeather, SearchWeb, CreateTask, etc.) via pattern matching and dedicated handler methods.
  • ReActAgentService handles general questions (AgentCommand::Ask) by letting the LLM decide which tools to call.

The command parsing flow remains unchanged — AgentService::parse_command() still classifies user input. Only Ask commands are routed through the ReAct agent when it’s enabled.
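The routing decision can be sketched as follows; the enum is reduced to two variants and the return values are just labels, purely for illustration:

```rust
// Simplified sketch of how Ask commands are routed to the ReAct agent
// while structured commands keep their dedicated handlers.

enum AgentCommand {
    GetWeather { city: String },
    Ask { question: String },
}

fn route(cmd: &AgentCommand, react_enabled: bool) -> &'static str {
    match cmd {
        // Only Ask goes through the ReAct loop, and only when enabled.
        AgentCommand::Ask { .. } if react_enabled => "ReActAgentService",
        // With tool calling disabled, Ask falls back to plain chat.
        AgentCommand::Ask { .. } => "ChatService::chat_with_context",
        // Everything else keeps its dedicated AgentService handler.
        _ => "AgentService handler",
    }
}

fn main() {
    let ask = AgentCommand::Ask { question: "What's on today?".to_string() };
    let weather = AgentCommand::GetWeather { city: "Berlin".to_string() };
    println!("{}", route(&ask, true));
    println!("{}", route(&weather, true));
}
```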

Extending with New Tools

To add a new tool:

  1. Define the port in crates/application/src/ports/ (if one doesn't already exist).
  2. Add a tool definition in ToolRegistry — create a def_your_tool() method returning a ToolDefinition with parameter schemas.
  3. Add execution logic in ToolExecutor — create an exec_your_tool() method that extracts arguments, calls the port, and formats the result.
  4. Wire the port in ToolRegistry::collect_tools() and ToolExecutor::execute() dispatch.
  5. Connect in main.rs — pass the port Arc to both ToolRegistry and ToolExecutor via with_your_port() builder methods.
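Steps 2 and 3 follow a consistent pattern. The sketch below shows the shape for a hypothetical `get_air_quality` tool; the method names, the `ToolDefinition` fields, and the port interaction are all illustrative, not the real registry/executor API:

```rust
// Hypothetical def_/exec_ pair for a new get_air_quality tool.

#[allow(dead_code)]
struct ToolDefinition {
    name: String,
    description: String,
    parameters_schema: String,
}

struct ToolRegistry;
impl ToolRegistry {
    // Step 2: describe the tool so the LLM knows how to call it.
    fn def_get_air_quality(&self) -> ToolDefinition {
        ToolDefinition {
            name: "get_air_quality".to_string(),
            description: "Current air-quality index for a city".to_string(),
            parameters_schema:
                r#"{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}"#
                    .to_string(),
        }
    }
}

struct ToolExecutor;
impl ToolExecutor {
    // Step 3: extract arguments, call the port, format the result.
    fn exec_get_air_quality(&self, city: &str) -> String {
        // A real implementation would call an AirQualityPort here
        // instead of returning a canned value.
        format!("AQI for {city}: 42 (good)")
    }
}

fn main() {
    let registry = ToolRegistry;
    let executor = ToolExecutor;
    let def = registry.def_get_air_quality();
    println!("{} -> {}", def.name, executor.exec_get_air_quality("Berlin"));
}
```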

Decorator Forwarding

All inference port decorators forward generate_with_tools() to their inner adapter:

  • SanitizedInferencePort — forwards directly (no sanitization for tool iterations)
  • CachedInferenceAdapter — forwards without caching (tool iterations are non-deterministic)
  • SemanticCachedInferenceAdapter — forwards without semantic caching
  • DegradedInferenceAdapter — forwards with circuit-breaker tracking
  • ModelRoutingAdapter — routes to the most capable (fallback) model
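The forwarding pattern is the standard decorator shape: wrap the inner port, add behavior to some methods, and pass others straight through. A minimal sketch, with an illustrative trait rather than the real `InferencePort`:

```rust
// Sketch of a caching decorator that forwards generate_with_tools()
// untouched, mirroring CachedInferenceAdapter's behavior.

trait InferencePort {
    fn generate(&self, prompt: &str) -> String;
    fn generate_with_tools(&self, prompt: &str) -> String;
}

struct PlainAdapter;
impl InferencePort for PlainAdapter {
    fn generate(&self, prompt: &str) -> String {
        format!("llm({prompt})")
    }
    fn generate_with_tools(&self, prompt: &str) -> String {
        format!("llm+tools({prompt})")
    }
}

struct CachedInferenceAdapter<P: InferencePort> {
    inner: P,
}

impl<P: InferencePort> InferencePort for CachedInferenceAdapter<P> {
    fn generate(&self, prompt: &str) -> String {
        // A real decorator would check/populate a cache here...
        self.inner.generate(prompt)
    }
    fn generate_with_tools(&self, prompt: &str) -> String {
        // ...but tool iterations are non-deterministic, so just forward.
        self.inner.generate_with_tools(prompt)
    }
}

fn main() {
    let port = CachedInferenceAdapter { inner: PlainAdapter };
    println!("{}", port.generate_with_tools("hi"));
}
```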

Relationship to Agentic Mode

The ReAct agent handles single-turn tool calling — one user query, one LLM loop deciding which tools to invoke. Agentic Mode extends this to multi-agent orchestration:

| Aspect | ReAct Agent | Agentic Mode |
|---|---|---|
| Scope | Single query | Complex multi-step task |
| Agents | 1 LLM loop | Multiple parallel sub-agents |
| Endpoint | `POST /v1/chat` | `POST /v1/agentic/tasks` |
| Progress | Synchronous or SSE chat stream | SSE task progress stream |
| Config | `[agent.tool_calling]` | `[agentic]` |

Each agentic sub-agent internally uses the same ReAct tool-calling loop. The orchestrator (AgenticOrchestrator) decomposes the user’s request, spawns sub-agents, and aggregates their results.
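In toy form, the decompose/spawn/aggregate shape looks like this; the naive string-based task splitting and the function names are purely illustrative, not how AgenticOrchestrator actually works:

```rust
// Toy sketch of an orchestrator: split a request into subtasks,
// run a sub-agent per subtask, and join the results.

fn sub_agent(query: &str) -> String {
    // Each sub-agent would internally run the same ReAct loop
    // sketched earlier; here it just echoes its query.
    format!("answer({query})")
}

fn orchestrate(task: &str) -> String {
    // Decompose (naively) and fan out to sub-agents.
    let subtasks: Vec<&str> = task.split(" and ").collect();
    let results: Vec<String> = subtasks.iter().map(|q| sub_agent(q)).collect();
    // Aggregate sub-agent results into one answer.
    results.join("; ")
}

fn main() {
    println!("{}", orchestrate("check the weather and list my tasks"));
}
```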

See API Reference — Agentic Tasks for endpoint documentation.