Documentation Index
Fetch the complete documentation index at: https://mintlify.com/koala73/worldmonitor/llms.txt
Use this file to discover all available pages before exploring further.
Overview
World Monitor integrates AI-powered analysis throughout the platform using a 4-tier provider fallback chain that prioritizes local compute and gracefully degrades through cloud APIs.

Privacy-First Design: Local LLM support (Ollama/LM Studio) means intelligence analysis can run entirely on your hardware with zero data leaving your machine.
AI Summarization Chain
The World Brief and country briefs use a cascading provider system.

Fallback Behavior
- Tier 1: Local LLM
- Tier 2: Groq
- Tier 3: OpenRouter
- Tier 4: Browser T5
Ollama / LM Studio
- Communicates via the OpenAI-compatible /v1/chat/completions endpoint
- Auto-discovers available models from the local instance
- Filters out embedding-only models
- Default model: llama3.1:8b
Local inference is private by default - no API keys, no telemetry, no data leaves your machine.
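Because the local tier speaks the OpenAI-compatible wire format, calling it looks like any chat-completions request. A minimal sketch against the default endpoint and model named above (`buildChatRequest` and `summarizeLocally` are hypothetical helper names, not the app's actual API):

```typescript
// Assumed default Ollama endpoint from the docs above.
const OLLAMA_BASE = "http://localhost:11434";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Pure helper: builds the OpenAI-compatible request payload.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: `${OLLAMA_BASE}/v1/chat/completions`,
    body: { model, messages, stream: false },
  };
}

// Sends headlines to the local model and returns the summary text.
async function summarizeLocally(headlines: string[]): Promise<string> {
  const req = buildChatRequest("llama3.1:8b", [
    { role: "system", content: "Summarize these headlines into a brief." },
    { role: "user", content: headlines.join("\n") },
  ]);
  const res = await fetch(req.url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req.body),
  });
  const data = await res.json();
  // OpenAI-compatible response shape: choices[0].message.content
  return data.choices[0].message.content;
}
```

No API key header is needed for a local instance, which is what makes this tier private by default.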
Headline Deduplication
Before sending to any LLM, headlines are deduplicated:
- Input: “Russian forces advance in Bakhmut” (Source A)
- Input: “Russian troops push forward in Bakhmut region” (Source B)
- Output: Single deduplicated headline
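One common way to collapse near-duplicate headlines like the pair above is token-set similarity. A minimal sketch, assuming a Jaccard measure and a tunable threshold (both are illustrative choices, not necessarily what the app uses):

```typescript
// Lowercase, strip punctuation, split into a set of word tokens.
function tokenize(headline: string): Set<string> {
  return new Set(
    headline
      .toLowerCase()
      .replace(/[^a-z0-9\s]/g, "")
      .split(/\s+/)
      .filter(Boolean)
  );
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|.
function jaccard(a: Set<string>, b: Set<string>): number {
  let inter = 0;
  for (const t of a) if (b.has(t)) inter++;
  return inter / (a.size + b.size - inter);
}

// Keep the first headline of each near-duplicate cluster.
// Threshold 0.3 is an assumption chosen for illustration.
function dedupe(headlines: string[], threshold = 0.3): string[] {
  const kept: string[] = [];
  for (const h of headlines) {
    const toks = tokenize(h);
    if (!kept.some((k) => jaccard(tokenize(k), toks) >= threshold)) {
      kept.push(h);
    }
  }
  return kept;
}
```

With this sketch, the two Bakhmut headlines share "russian", "in", and "bakhmut" and merge into one; unrelated headlines pass through untouched.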
Redis Caching
All API-tier summaries are cached server-side:
- Same headlines viewed by 1,000 users → 1 LLM call
- Instant results for cached queries
- Reduced API costs
- Better performance
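The "1,000 users → 1 LLM call" property follows from keying the cache on the headline set itself. A sketch of that idea, with an in-memory Map standing in for Redis (`cacheKey` and `cachedSummarize` are hypothetical names; in production the SET would carry a TTL):

```typescript
import { createHash } from "node:crypto";

// Stand-in for Redis; real code would use a Redis client with TTLs.
const cache = new Map<string, string>();
let llmCalls = 0;

// Same headline set (in any order) → same key → one shared LLM call.
function cacheKey(headlines: string[]): string {
  return createHash("sha256")
    .update([...headlines].sort().join("\n"))
    .digest("hex");
}

async function cachedSummarize(
  headlines: string[],
  callLLM: (h: string[]) => Promise<string>
): Promise<string> {
  const key = cacheKey(headlines);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // instant for cached queries
  llmCalls++;
  const summary = await callLLM(headlines);
  cache.set(key, summary); // in production: SET with an expiry
  return summary;
}
```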
Variant-Aware Prompting
System prompts adapt to the active dashboard variant.

Language-Aware Output
When the UI language is non-English, summaries are generated in that language. LLM translation enables cross-language intelligence gathering: read sources in one language, get summaries in another.
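Variant- and language-aware prompting can be as simple as composing the system prompt from both inputs. A hypothetical sketch (the variant names and prompt wording are assumptions, not the app's actual prompts):

```typescript
// Illustrative variants; the real dashboard may define others.
type Variant = "world" | "country";

function buildSystemPrompt(variant: Variant, uiLanguage: string): string {
  const base =
    variant === "country"
      ? "You are writing a country intelligence brief."
      : "You are writing a global situation brief.";
  // Append a translation instruction only for non-English UIs.
  const lang =
    uiLanguage === "en"
      ? ""
      : ` Write the summary in ${uiLanguage}, even if the sources are in other languages.`;
  return base + lang;
}
```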
Local Model Discovery
The desktop app automatically discovers available Ollama/LM Studio models:
- If discovery fails, a text input appears
- Enter model names directly
- Example: llama3.1:8b,mistral:7b,codellama:13b
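Discovery plus the manual fallback can be sketched as two small helpers: one that filters embedding-only models out of a discovery response, and one that parses the comma-separated manual entry shown above. The name patterns used to detect embedding models are assumptions for illustration:

```typescript
// Shape of one entry in a model-discovery response (e.g. Ollama's model list).
interface DiscoveredModel {
  name: string;
}

// Assumed heuristics for spotting embedding-only models by name.
const EMBEDDING_PATTERNS = [/embed/i, /bge-/i, /minilm/i];

// Keep only models usable for chat completion.
function chatModels(models: DiscoveredModel[]): string[] {
  return models
    .map((m) => m.name)
    .filter((n) => !EMBEDDING_PATTERNS.some((p) => p.test(n)));
}

// Fallback when discovery fails: comma-separated names typed by the user.
function parseManualEntry(input: string): string[] {
  return input
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean);
}
```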
Threat Classification Pipeline
Every news item passes through a 3-stage hybrid classifier:
- Stage 1: Keyword
- Stage 2: Browser ML
- Stage 3: LLM Classifier
Instant Pattern Matching
- ~120 threat keywords organized by severity: Critical, High, Medium, Low, Info
- 14 event categories: conflict, protest, disaster, diplomatic, economic, terrorism, cyber, health, environmental, military, crime, infrastructure, tech, general
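The Stage 1 keyword pass can be sketched as a severity-ordered table lookup. The keyword lists below are tiny illustrative samples, not the app's ~120-keyword table:

```typescript
type Severity = "critical" | "high" | "medium" | "low" | "info";

// Illustrative subset; the real table has ~120 entries.
const SEVERITY_KEYWORDS: Record<Severity, string[]> = {
  critical: ["nuclear", "invasion"],
  high: ["airstrike", "explosion"],
  medium: ["protest", "sanctions"],
  low: ["election", "summit"],
  info: ["report", "statement"],
};

// Check the most severe tiers first so the strongest match wins.
function classifyKeyword(
  headline: string
): { severity: Severity; source: "keyword" } | null {
  const text = headline.toLowerCase();
  for (const severity of [
    "critical",
    "high",
    "medium",
    "low",
    "info",
  ] as Severity[]) {
    if (SEVERITY_KEYWORDS[severity].some((k) => text.includes(k))) {
      return { severity, source: "keyword" };
    }
  }
  return null; // no keyword hit; later ML/LLM stages may still classify
}
```

Because this is pure string matching, it runs synchronously and is what makes Stage 1 instant.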
UI Never Blocks
Classification uses progressive enhancement:
- News items render immediately with keyword classification
- ML results arrive within seconds and update the UI
- LLM results arrive and override if more confident
- Each item shows a source tag: keyword, ml, or llm
Users never see a blank screen waiting for AI. Keyword results are instant, AI refinements layer on progressively.
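The override rule described above can be captured in one merge function: ML replaces the instant keyword result, and the LLM only wins when it is more confident. A sketch, with hypothetical type names:

```typescript
type Source = "keyword" | "ml" | "llm";

interface Classification {
  severity: string;
  confidence: number;
  source: Source; // shown as the item's source tag in the UI
}

// Decide whether an incoming result replaces the one currently shown.
function merge(
  current: Classification,
  incoming: Classification
): Classification {
  // ML always refines the instant keyword result.
  if (current.source === "keyword" && incoming.source === "ml") return incoming;
  // LLM overrides only when it is more confident.
  if (incoming.source === "llm" && incoming.confidence > current.confidence) {
    return incoming;
  }
  return current;
}
```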
Country Brief AI Analysis
Clicking any country opens a full intelligence dossier with AI-generated analysis:
- Situation summary (2-3 paragraphs)
- Key developments
- Risk assessment
- Inline citation anchors [1]–[8] that scroll to sources
Focal Point Detection
Correlates entities across multiple data streams.

Trending Keyword Spike Detection
- 2x baseline: Minor spike
- 5x baseline: Major spike
- 10x baseline: Viral spike
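The spike tiers above are just ratio thresholds against a rolling baseline. A minimal sketch (the function name and the zero-baseline handling are assumptions):

```typescript
type SpikeLevel = "none" | "minor" | "major" | "viral";

// Classify a keyword's current mention count against its baseline rate.
function spikeLevel(currentCount: number, baseline: number): SpikeLevel {
  if (baseline <= 0) return "none"; // no history to compare against
  const ratio = currentCount / baseline;
  if (ratio >= 10) return "viral"; // 10x baseline
  if (ratio >= 5) return "major";  // 5x baseline
  if (ratio >= 2) return "minor";  // 2x baseline
  return "none";
}
```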
Performance Optimizations
Timeout Cascade
Each tier has a 5-second timeout.

Circuit Breaker
- Tracks error rates per provider
- Opens circuit after repeated failures
- Skips to next tier immediately
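Combining the per-tier timeout with the circuit breaker yields a simple cascade loop. A sketch, assuming a failure-count threshold (the threshold value and class shape are illustrative, not the app's implementation):

```typescript
// Tracks consecutive failures per provider; "opens" after a threshold.
class CircuitBreaker {
  private failures = 0;
  constructor(private readonly maxFailures = 3) {}
  get open(): boolean {
    return this.failures >= this.maxFailures;
  }
  recordFailure(): void {
    this.failures++;
  }
  recordSuccess(): void {
    this.failures = 0;
  }
}

// Reject if the provider does not answer within `ms` (5 s per the docs).
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)
    ),
  ]);
}

interface Tier {
  name: string;
  breaker: CircuitBreaker;
  call: () => Promise<string>;
}

async function summarizeWithFallback(
  tiers: Tier[],
  timeoutMs = 5000
): Promise<string> {
  for (const tier of tiers) {
    if (tier.breaker.open) continue; // skip to the next tier immediately
    try {
      const result = await withTimeout(tier.call(), timeoutMs);
      tier.breaker.recordSuccess();
      return result;
    } catch {
      tier.breaker.recordFailure(); // timeout or error: try the next tier
    }
  }
  throw new Error("All providers failed");
}
```

A tier with an open circuit is skipped without spending its 5-second timeout, which is what keeps the cascade fast when a provider is down.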
Desktop App Settings
Settings window (Cmd+,) has a dedicated LLMs tab:
- Saving in Settings writes to the OS keychain
- Broadcasts localStorage change event
- Main window hot-reloads secrets
- No app restart required
API Key Storage
Desktop App

OS keychain integration reduces authorization prompts:
- macOS: Keychain Access
- Windows: Credential Manager
- Linux: Secret Service API
- Old: 20+ prompts (one per key)
- New: 1 prompt per launch
Browser-Side ML Worker
The ML worker runs in a separate Web Worker:
- Toggle in AI Flow settings
- When disabled: Worker never initializes
- When enabled mid-session: Initializes immediately
- When disabled: Terminates worker
Disabling the browser model saves ~200MB of WebGL memory and eliminates ONNX model downloads.
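The toggle behavior above amounts to a small lifecycle controller around the Web Worker. A sketch with the worker factory injected so the logic is testable (`MlWorkerController` is a hypothetical name; in the browser the factory would call `new Worker(...)`):

```typescript
// Minimal surface we need from a Web Worker.
interface WorkerLike {
  terminate(): void;
}

class MlWorkerController {
  private worker: WorkerLike | null = null;
  constructor(private readonly spawn: () => WorkerLike) {}

  setEnabled(enabled: boolean): void {
    if (enabled && this.worker === null) {
      // Enabled mid-session: initialize immediately.
      this.worker = this.spawn();
    } else if (!enabled && this.worker !== null) {
      // Disabled: terminate and free the worker's memory.
      this.worker.terminate();
      this.worker = null;
    }
    // When disabled from the start, the worker is simply never created.
  }

  get running(): boolean {
    return this.worker !== null;
  }
}
```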
Troubleshooting
Ollama not connecting?
- Verify Ollama is running: ollama serve
- Check the endpoint: http://localhost:11434
- List available models: ollama list
- Check CORS (the desktop app handles this automatically)
Cloud providers failing?
- Verify API keys are configured
- Check provider toggles enabled
- Look for errors in browser console
- Confirm internet connectivity (for cloud APIs)
Summaries slow?
- First request triggers an LLM call (slow)
- Subsequent requests instant (cached)
- Consider local Ollama for consistent speed
- Browser T5 is slowest but always works
Related Features
- Live News - AI classifies and summarizes news
- Desktop App - Local LLM integration
- Data Layers - AI enhances geographic correlation