Troubleshoot OpenRAG

This page provides troubleshooting advice for issues you might encounter when using OpenRAG or contributing to OpenRAG.

Installation and startup issues

The following issues relate to OpenRAG installation and startup.

OpenSearch fails to start

Check that the value of the OPENSEARCH_PASSWORD environment variable meets the OpenSearch password complexity requirements.

If you need to change the password, you must reset the OpenRAG services.
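
For reference, OpenSearch typically requires a password of at least 8 characters that includes an uppercase letter, a lowercase letter, a digit, and a special character. The following is a minimal sketch, assuming OpenRAG reads environment variables from a .env file; the password shown is an example only:

# Example only; replace with your own secret.
# Assumes environment variables are defined in a .env file.
OPENSEARCH_PASSWORD='Str0ng!Passw0rd'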

OpenRAG fails to start from the TUI with operation not supported

This error occurs when you start OpenRAG from the TUI in WSL (Windows Subsystem for Linux): because OpenRAG runs inside a WSL environment, webbrowser.open() can't launch a browser automatically.

To access the OpenRAG application, open a web browser and enter http://localhost:3000 in the address bar.
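
Alternatively, Python's webbrowser module honors the BROWSER environment variable, so you might be able to have the TUI open a Windows browser automatically. This sketch assumes the wslu package, which provides the wslview helper, is installed in your WSL distribution:

# wslview opens URLs in your default Windows browser from inside WSL.
export BROWSER=wslview
uvx openrag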

OpenRAG installation fails with unable to get local issuer certificate

If you are installing OpenRAG on macOS, and the installation fails with unable to get local issuer certificate, run the following command, and then retry the installation:

open "/Applications/Python VERSION/Install Certificates.command"

Replace VERSION with your installed Python version, such as 3.13.
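
For example, with Python 3.13:

open "/Applications/Python 3.13/Install Certificates.command"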

Application onboarding gets stuck on Google Chrome

If the OpenRAG onboarding process gets stuck when using Google Chrome, try clearing your browser's cache.

Langflow connection issues

Verify that the value of the LANGFLOW_SUPERUSER environment variable is correct. For more information about this variable and how this variable controls Langflow access, see Langflow settings.
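
For example, assuming your deployment defines environment variables in a .env file, you can print the configured value to confirm it matches what you expect:

# Show the Langflow superuser settings; adjust the path to your .env file.
grep LANGFLOW_SUPERUSER .env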

Container out of memory errors

Increase your container VM's allocated memory, or use a CPU-only deployment to reduce memory usage.

For TUI-managed deployments, you can enable CPU mode on the TUI's Status page.

For self-managed deployments, run OpenRAG in CPU-only mode by using the docker-compose.yml file that doesn't include GPU overrides.
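
For example, a minimal sketch of a CPU-only start, assuming you run the command from the directory that contains docker-compose.yml:

# Start OpenRAG with the base compose file only, skipping GPU override files.
docker compose -f docker-compose.yml up -d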

Memory issue with Podman on macOS

If you're using Podman on macOS, you might need to increase the memory allocated to your Podman machine. This example recreates the machine with 8 GB of RAM, which is the minimum recommended for OpenRAG:

podman machine stop
podman machine rm                     # removes the existing machine; data stored in the VM is lost
podman machine init --memory 8192    # 8 GB example
podman machine start
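
After the machine restarts, you can confirm the new allocation. This assumes the default machine name and a Podman version whose inspect output includes a Resources block:

# Print the memory (in MiB) allocated to the default Podman machine.
podman machine inspect --format '{{.Resources.Memory}}'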

Port conflicts

With the default environment variable values, OpenRAG requires the following ports to be available on the host machine:

  • 3000: OpenRAG application
  • 5001: Docling local ingestion service
  • 5601: OpenSearch Dashboards
  • 7860: Langflow
  • 8000: OpenRAG backend API
  • 9200: OpenSearch service
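
To check whether another process is already using one of these ports, you can run a quick scan on macOS or Linux:

# List any processes listening on the ports that OpenRAG needs.
for port in 3000 5001 5601 7860 8000 9200; do
  lsof -nP -iTCP:"$port" -sTCP:LISTEN
done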

OCR ingestion fails (easyocr not installed)

Docling ingestion can fail with an OCR-related error that mentions easyocr is missing. This is likely due to a stale uv cache when you install OpenRAG with uvx.

When you invoke OpenRAG with uvx openrag, uvx creates a cached, ephemeral environment that doesn't modify your project. The location of this cache depends on your operating system; for example, on macOS, it's typically a user cache directory such as ~/.cache/uv.

This cache can become stale, producing errors like missing dependencies.

To resolve this issue, do the following:

  1. If the TUI is open, press q to exit the TUI.

  2. Clear the uv cache:

    uv cache clean

    To clear the OpenRAG cache only, run:

    uv cache clean openrag
  3. Invoke OpenRAG to restart the TUI:

    uvx openrag
  4. Click Launch OpenRAG, and then retry document ingestion.

If you install OpenRAG with uv, dependencies are synced directly from your pyproject.toml file. This should automatically install easyocr because easyocr is included as a dependency in OpenRAG's pyproject.toml.
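
To confirm that easyocr is available in a uv-managed environment, a quick check like the following works, assuming you run it from the OpenRAG project directory:

# Sync dependencies from pyproject.toml, then verify that easyocr imports cleanly.
uv sync
uv run python -c "import easyocr; print('easyocr OK')"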

If you don't need OCR, you can disable OCR-based processing in your ingestion settings to avoid requiring easyocr.

Upgrade fails with langflow container already exists

If you encounter a langflow container already exists error when upgrading OpenRAG, this typically means you upgraded OpenRAG with uv, but you didn't remove or upgrade containers from a previous installation.

To resolve this issue, do the following:

  1. Remove only the Langflow container:

    1. Stop the Langflow container:

      Docker
      docker stop langflow
      Podman
      podman stop langflow
    2. Remove the Langflow container:

      Docker
      docker rm langflow --force
      Podman
      podman rm langflow --force
  2. Retry the upgrade.

  3. If removing and recreating the Langflow container doesn't resolve the issue, then you must reset all containers or reinstall OpenRAG.

  4. Retry the upgrade.

    If no updates are available after reinstalling OpenRAG, then you reinstalled the latest version and your deployment is already up to date.

Document ingestion or similarity search issues

See Troubleshoot ingestion.

Ollama model issues

OpenRAG isn't guaranteed to be compatible with every model available through Ollama. For example, some models might produce unexpected results, such as JSON-formatted output instead of natural language responses, and other models aren't appropriate for the types of tasks that OpenRAG performs, such as models that generate media.

The OpenRAG team recommends the following models when using Ollama as your model provider:

  • Language models: gpt-oss:20b or mistral-nemo:12b.

    If you choose gpt-oss:20b, consider using Ollama Cloud or running Ollama on a remote machine because this model requires at least 16 GB of RAM.

  • Embedding models: nomic-embed-text:latest, mxbai-embed-large:latest, or embeddinggemma:latest.
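
For example, to download a recommended language model and embedding model with the Ollama CLI:

# Pull the models so they're available when you configure OpenRAG.
ollama pull mistral-nemo:12b
ollama pull nomic-embed-text:latest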

You can experiment with other models, but if you encounter issues that you are unable to resolve through other RAG best practices (like context filters and prompt engineering), try switching to one of the recommended models. You can submit an OpenRAG GitHub issue to request support for specific models.

Chat issues

See Troubleshoot chat.