
AI Configuration

The AI Configuration category controls the AI assistant backend that powers the Ask AI feature and TalonStrike in Hawkra workspaces. You can choose from Google Gemini, Anthropic Claude, or a self-hosted LLM server.

[Screenshot: AI Configuration panel]

Settings Reference

| Setting | Key | Type | Default | Description |
|---|---|---|---|---|
| LLM Mode | llm_mode | Dropdown | gemini | gemini uses the Google Gemini API. claude uses the Anthropic Claude API. local uses a self-hosted LLM server. The legacy value cloud is accepted as an alias for gemini. |
| Gemini API Key | gemini_api_key | String | Empty | Google AI Studio API key. Required when LLM mode is gemini. Get one from Google AI Studio. |
| Gemini Model | gemini_model | Dropdown | gemini-2.0-flash | Gemini model to use. Has no effect when LLM mode is claude or local. |
| Anthropic API Key | anthropic_api_key | String | Empty | Anthropic API key. Required when LLM mode is claude. Get one from Anthropic Console. |
| Anthropic Model | anthropic_model | Dropdown | claude-sonnet-4-6 | Claude model to use. Has no effect when LLM mode is gemini or local. |
| Local LLM Server | local_llm_server | String | Empty | URL of your local LLM server. Required when LLM mode is local (e.g., http://ollama:11434). |
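If you want to sanity-check a hosted API key before saving it in Hawkra, you can query each provider's public model-listing endpoint directly. These are the standard Google AI Studio and Anthropic endpoints, not Hawkra-specific APIs; substitute your own keys:

```shell
# Verify a Gemini API key by listing the models it can access.
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"

# Verify an Anthropic API key by listing available models.
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01"
```

A valid key returns a JSON list of models; an invalid key returns an authentication error.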

Gemini Model Options

| Model | Description |
|---|---|
| gemini-2.0-flash | Fast responses with good quality. Lowest API cost. Recommended for most use cases. |
| gemini-2.0-pro | Higher quality with deeper reasoning. A good balance for complex security analysis. |
| gemini-2.5-pro | Best quality. Ideal for complex multi-step analysis. |

Claude Model Options

| Model | Description |
|---|---|
| claude-sonnet-4-6 | Fast responses with excellent quality. Best balance of speed and capability. Recommended for most use cases. |
| claude-opus-4-6 | Most capable model with the deepest reasoning. Ideal for complex multi-step security analysis. |
| claude-haiku-4-5 | Fastest and most cost-effective. Good for simple queries and high-volume usage. |

Setting Up a Local LLM with Ollama

Ollama is the recommended way to run a local LLM for Hawkra. It provides a simple API server that is compatible with Hawkra's local LLM integration.

Option 1: Ollama on the Same Host

If you want to run Ollama alongside Hawkra on the same server, add it to your Docker Compose configuration:

services:
  ollama:
    image: ollama/ollama
    container_name: hawkra-ollama
    volumes:
      - ollama_data:/root/.ollama
    ports:
      - "11434:11434"
    restart: unless-stopped

volumes:
  ollama_data:
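With the service defined, you can bring up just the Ollama container (assuming the snippet lives in your existing docker-compose.yml):

```shell
# Start only the Ollama service in detached mode.
docker compose up -d ollama
```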

After starting the Ollama container, pull a model:

docker exec hawkra-ollama ollama pull llama3

Then configure Hawkra: set LLM Mode to local and Local LLM Server to http://ollama:11434. If Ollama is on a different Docker network, use the host machine's IP address instead of the container name.
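A quick way to confirm that the server is reachable and has a model available is to hit Ollama's model-listing endpoint. The URL below assumes the compose setup above; adjust the host if Ollama runs elsewhere:

```shell
# Ollama lists its pulled models on /api/tags.
curl -s http://ollama:11434/api/tags
```

A healthy server returns a JSON object whose models array includes the model you pulled (e.g., llama3).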

Option 2: Ollama on a Separate Machine

  1. Install Ollama on the target machine following the instructions at ollama.com/download.
  2. Pull a model: ollama pull llama3
  3. Ensure the Ollama server is accessible from your Hawkra server on port 11434.
  4. Configure Hawkra: set LLM Mode to local and Local LLM Server to http://<ollama-server-ip>:11434.
Suggested models to pull:

| Model | Size |
|---|---|
| llama3 | 8B |
| llama3:70b | 70B |
| mistral | 7B |
| mixtral | 8x7B |
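By default Ollama binds to 127.0.0.1, so step 3 above usually requires making it listen on all interfaces. Ollama reads the OLLAMA_HOST environment variable for this; one way to apply it, assuming you start the server manually (for a systemd install, set the variable in the service's environment instead):

```shell
# Make Ollama listen on all interfaces instead of loopback only,
# so the Hawkra server can reach it over the network.
OLLAMA_HOST=0.0.0.0 ollama serve
```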

Configuration via Environment Variables

| Setting | Environment Variable |
|---|---|
| LLM Mode | LLM_MODE |
| Gemini API Key | GEMINI_API_KEY |
| Gemini Model | GEMINI_MODEL |
| Anthropic API Key | ANTHROPIC_API_KEY |
| Anthropic Model | ANTHROPIC_MODEL |
| Local LLM Server | LOCAL_LLM_SERVER |
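In a Docker deployment these variables go in the Hawkra service definition. A sketch, assuming your Hawkra service is named hawkra in docker-compose.yml:

```yaml
services:
  hawkra:
    environment:
      - LLM_MODE=local
      - LOCAL_LLM_SERVER=http://ollama:11434
```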