LLM Client

Using Local LLM (Ollama) with ClimAID

ClimAID includes built-in support for generating AI-powered reports using local language models via Ollama. This allows you to create detailed, scientific summaries of disease projections without requiring internet access or API keys.


Step 1 — Install Ollama

Before using the LLM features, you need to install Ollama on your system.

  1. Visit the official website: https://ollama.com/download

  2. Download and install the version for your operating system (Windows, macOS, or Linux).


Step 2 — Start the Ollama Server

After installation, start the Ollama service:

ollama serve

This will launch a local server at:

http://localhost:11434

ClimAID connects to this server to generate reports.
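
Before generating reports, it can help to verify that the server is actually reachable. The helper below is a hypothetical convenience function, not part of ClimAID; it simply checks whether anything answers at the default Ollama address:

```python
import requests

def ollama_is_running(host="http://localhost:11434", timeout=2):
    """Return True if a local Ollama server answers at `host`."""
    try:
        # The Ollama root endpoint replies with a short status message
        # ("Ollama is running") when the server is up.
        requests.get(host, timeout=timeout)
        return True
    except requests.exceptions.RequestException:
        return False

print("Ollama running:", ollama_is_running())
```

If this prints False, start the server with `ollama serve` before continuing.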


Step 3 — Download a Language Model

You must download at least one model before using the LLM.

Recommended:

ollama pull mistral

Other options include:

  • llama3 (strong reasoning)
  • phi3 (lightweight, faster)
  • mixtral (larger, more powerful)

Step 4 — Use with ClimAID

Once Ollama is running and a model is installed, you can use it directly:

from climaid.llm_client import LocalOllamaLLM

llm = LocalOllamaLLM(model="mistral")

Step 5 — Generate Reports

Pass the LLM client into the reporting pipeline, where reporter is an instance of DiseaseReporter:

report = reporter.generate_report(
    projection_summary=summary,
    llm_client=llm,
    open_browser=True
)

ClimAID will automatically:

  • Process projection outputs
  • Generate structured summaries
  • Use the LLM to create a readable scientific report
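
The steps above can be sketched end to end. The StubLLM class, the generate_report function body, and the prompt wording below are illustrative assumptions so the flow can run without a server; they are not ClimAID's actual implementation:

```python
class StubLLM:
    """Stands in for LocalOllamaLLM so the flow runs without an Ollama server."""
    def generate(self, prompt: str) -> str:
        return f"[report based on prompt of {len(prompt)} characters]"

def generate_report(projection_summary: str, llm_client) -> str:
    # 1. Process projection outputs into a structured summary (simplified here).
    structured = f"Projection summary:\n{projection_summary}"
    # 2. Ask the LLM client to turn the summary into a readable report.
    prompt = f"Write a scientific report for:\n{structured}"
    return llm_client.generate(prompt)

report = generate_report("Cases rise 12% by 2050 under RCP8.5", StubLLM())
print(report)
```

Swapping StubLLM for LocalOllamaLLM(model="mistral") gives the real pipeline, since both expose the same generate(prompt) method.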

Common Issues

Ollama not running

Fix:

ollama serve

Model not found

Fix:

ollama pull mistral

Timeout error

Fix:

  • Use a smaller model (phi3)
  • Reduce input size
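
One simple way to reduce input size is to cap the prompt length before it is sent to the model. The helper below is an illustrative sketch, not part of ClimAID, and the 8000-character limit is an arbitrary assumption:

```python
def truncate_prompt(text: str, max_chars: int = 8000) -> str:
    """Trim an over-long prompt before sending it to a slow local model."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[...truncated...]"

long_summary = "x" * 20000
print(len(truncate_prompt(long_summary)))
```

Truncating the summary this way trades some detail for a response that fits within the client's timeout.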

Notes

  • Runs fully offline
  • No API keys required
  • Ideal for secure or research environments
  • Performance depends on your local machine

API Reference

Below is the full API for the LLM client:

Local LLM client using Ollama API (offline & free). Compatible with DiseaseReporter.

Source code in climaid\llm_client.py
import requests


class LocalOllamaLLM:
    """
    Local LLM client using Ollama API (offline & free).
    Compatible with DiseaseReporter.
    """

    def __init__(self, model="mistral", host="http://localhost:11434"):
        self.model = model
        self.host = host

    def generate(self, prompt: str) -> str:
        try:
            response = requests.post(
                f"{self.host}/api/generate",
                json={
                    "model": self.model,
                    "prompt": prompt,
                    "stream": False,
                    "options": {
                        "temperature": 0.2,   # low temperature for a factual, scientific tone
                        "num_predict": 3000,  # allow long reports
                    },
                },
                timeout=600,  # generous timeout: local models can be slow on long prompts
            )
            response.raise_for_status()
            return response.json()["response"]

        except requests.exceptions.Timeout:
            raise RuntimeError(
                "Local LLM timed out. The model is too slow or the prompt is very large."
            )

        except requests.exceptions.ConnectionError:
            raise RuntimeError(
                "Ollama is not running. Start it using: 'ollama serve'"
            )