Quickstart

Use this quickstart to install OpenRAG, and then try some of OpenRAG's core features.

Prerequisites

  • Install Python version 3.13 or later.

Install OpenRAG

For this quickstart, install OpenRAG with the automatic installer script and basic setup. The script installs OpenRAG dependencies, including Docker or Podman, and then installs and runs OpenRAG with uvx.

  1. Create a directory for your OpenRAG installation, and then change to that directory:

    mkdir openrag-workspace
    cd openrag-workspace
  2. Download the OpenRAG install script, move it to your OpenRAG directory, and then run it:

    bash run_openrag_with_prereqs.sh

    Wait while the installer script prepares your environment and installs OpenRAG. You might be prompted to install certain dependencies if they aren't already present in your environment.

    The entire process can take a few minutes. Once the environment is ready, the OpenRAG Terminal User Interface (TUI) starts.

    OpenRAG TUI Interface

  3. In the TUI, click Basic Setup.

  4. For Langflow Admin Password, click Generate Password to create a Langflow administrator password and username.

  5. Use the default values for all other fields.

  6. Click Save Configuration.

    Your OpenRAG configuration and passwords are stored in an OpenRAG .env file that is created automatically at ~/.openrag/tui. OpenRAG container definitions are stored in the docker-compose files in the same directory.

  7. Click Start OpenRAG to start the OpenRAG services.

    This process can take some time while OpenRAG pulls and runs the container images. If all services start successfully, the TUI prints a confirmation message:

    Services started successfully
    Command completed successfully
  8. Click Close, and then click Launch OpenRAG to access the OpenRAG application and start the application onboarding process.

  9. For this quickstart, select the OpenAI model provider, enter your OpenAI API key, and then click Complete. Use the default settings for all other model options.

  10. Click through the overview slides for a brief introduction to OpenRAG, or click Skip overview. You can complete this quickstart without going through the overview. The overview demonstrates some basic functionality that is covered in the next section and in other parts of the OpenRAG documentation.
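During basic setup, your configuration is written to the .env file at ~/.openrag/tui. As a purely hypothetical sketch of that file (only the LANGFLOW_SUPERUSER and LANGFLOW_SUPERUSER_PASSWORD variables are confirmed by this guide; the other entries are illustrative assumptions), it might contain entries like:

```
# Hypothetical ~/.openrag/tui/.env contents. Variable names other than the
# Langflow superuser credentials are illustrative assumptions; inspect the
# generated file for the exact set your installation uses.
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=<generated-password>
OPENAI_API_KEY=<your-openai-api-key>
```

You'll need the Langflow superuser credentials from this file if Langflow prompts you to log in later.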

Load and chat with documents

Use the OpenRAG Chat to explore the documents in your OpenRAG database using natural language queries. Some documents are included by default to get you started, and you can load your own documents.

  1. In OpenRAG, click Chat.

  2. For this quickstart, ask the agent what documents are available. For example: What documents are available to you?

    The agent responds with a summary of OpenRAG's default documents.

  3. To verify the agent's response, click Knowledge to view the documents stored in the OpenRAG OpenSearch database. You can click a document to view the chunks of the document as they are stored in the database.

  4. Click Add Knowledge to add your own documents to your OpenRAG knowledge base.

    For this quickstart, use either the File or Folder upload option to load documents from your local machine. Folder uploads an entire directory. The default directory is ~/.openrag/documents.

    For information about the cloud storage provider options, see Ingest files with OAuth connectors.

  5. Return to the Chat window, and then ask a question related to the documents that you just uploaded.

    If the agent's response doesn't seem to reference your documents correctly, try the following:

    • Click Function Call: search_documents (tool_call) to view the log of tool calls made by the agent. This is helpful for troubleshooting because it shows you how the agent used particular tools.

    • Click Knowledge to confirm that the documents are present in the OpenRAG OpenSearch database, and then click each document to see how the document was chunked. If a document was chunked improperly, you might need to adjust the ingestion settings or modify and reupload the document.

    • Click Settings to modify the knowledge ingestion settings.

    For more information, see Configure knowledge and Ingest knowledge.

Change the language model and chat settings

  1. To change the knowledge ingestion settings, agent behavior, or language model, click Settings.

    The Settings page provides quick access to commonly used parameters like the Language model and Agent Instructions.

  2. For greater insight into the underlying Langflow flow that drives the OpenRAG chat, click Edit in Langflow and then click Proceed to launch the Langflow visual editor in a new browser window.

    If Langflow requests login information, enter the LANGFLOW_SUPERUSER and LANGFLOW_SUPERUSER_PASSWORD from the .env file at ~/.openrag/tui.

    The OpenRAG OpenSearch Agent flow opens in a new browser window.

    OpenRAG OpenSearch Agent flow

  3. For this quickstart, try changing the model. Click the Language Model component, and then change the Model Name to a different OpenAI model.

    After you edit a built-in flow, you can click Restore flow on the Settings page to revert the flow to its original state from when you first installed OpenRAG.

  4. Press Command+S (Ctrl+S) to save your changes.

    You can close the Langflow browser window, or leave it open if you want to continue experimenting with the flow editor.

  5. Switch to your OpenRAG browser window, and then click the Conversations tab to start a new conversation. This ensures that the chat doesn't persist any context from the previous conversation with the original model.

  6. Ask the same question you asked in Load and chat with documents to see how the response differs from the original model.

Integrate OpenRAG into an application

Langflow in OpenRAG includes pre-built flows that you can integrate into your applications using the Langflow API. You can use these flows as-is or modify them to better suit your needs, as demonstrated in Change the language model and chat settings.

You can send and receive requests with the Langflow API using Python, TypeScript, or curl.

  1. Open the OpenRAG OpenSearch Agent flow in the Langflow visual editor: From the Chat window, click Settings, click Edit in Langflow, and then click Proceed.

  2. Optional: If you don't want to use the Langflow API key that is generated automatically when you install OpenRAG, you can create a Langflow API key. This key doesn't grant access to OpenRAG; it is only for authenticating with the Langflow API.

    1. In the Langflow visual editor, click your user icon in the header, and then select Settings.
    2. Click Langflow API Keys, and then click Add New.
    3. Name your key, and then click Create API Key.
    4. Copy the API key and store it securely.
    5. Exit the Langflow Settings page to return to the visual editor.
  3. Click Share, and then select API access to get pregenerated code snippets that call the Langflow API and run the flow.

    These code snippets construct API requests with your Langflow server URL (LANGFLOW_SERVER_ADDRESS), the flow to run (FLOW_ID), required headers (LANGFLOW_API_KEY, Content-Type), and a payload containing the required inputs to run the flow, including a default chat input message.

    In production, you would modify the inputs to suit your application logic. For example, you could replace the default chat input message with dynamic user input.

    import requests
    import uuid

    api_key = 'LANGFLOW_API_KEY'
    url = "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID"  # The complete API endpoint URL for this flow

    # Request payload configuration
    payload = {
        "output_type": "chat",
        "input_type": "chat",
        "input_value": "hello world!",
    }
    payload["session_id"] = str(uuid.uuid4())

    headers = {"x-api-key": api_key}

    try:
        # Send API request
        response = requests.request("POST", url, json=payload, headers=headers)
        response.raise_for_status()  # Raise exception for bad status codes

        # Print response
        print(response.text)
    except requests.exceptions.RequestException as e:
        print(f"Error making API request: {e}")
    except ValueError as e:
        print(f"Error parsing response: {e}")
  4. Copy your preferred snippet, and then run it:

    • Python: Paste the snippet into a .py file, save it, and then run it with python filename.py.
    • TypeScript: Paste the snippet into a .ts file, save it, and then run it with ts-node filename.ts.
    • curl: Paste the snippet directly into your terminal, and then run it.

If the request is successful, the response includes many details about the flow run, including the session ID, inputs, outputs, components, durations, and more.

In production, you won't pass the raw response to the user in its entirety. Instead, you would extract and reformat relevant fields for different use cases, as demonstrated in the Langflow quickstart. For example, you could pass the chat output text to a front-end user-facing application, and store specific fields in logs and backend data stores for monitoring, chat history, or analytics. You could also pass the output from one flow as input to another flow.
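As a hedged sketch of that extraction step, the nested path below is an assumption based on a typical Langflow run response (and the sample response is fabricated for illustration), so verify it against the actual JSON your flow returns:

```python
# Hypothetical Langflow run response, trimmed to only the fields used
# below. A real response includes many more keys (components, durations,
# and so on), and its nesting may differ for your flow.
sample_response = {
    "session_id": "example-session",
    "outputs": [
        {
            "outputs": [
                {"results": {"message": {"text": "Three default documents are available."}}}
            ]
        }
    ],
}

def extract_chat_text(run_response: dict) -> str:
    """Pull the chat output text out of a run response.

    Assumes an outputs[0].outputs[0].results.message.text path; adjust
    the keys if your flow's response nests its results differently.
    """
    return run_response["outputs"][0]["outputs"][0]["results"]["message"]["text"]

print(extract_chat_text(sample_response))
```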

Next steps

  • Reinstall OpenRAG with your preferred settings: This quickstart used uvx and a minimal setup to demonstrate OpenRAG's core functionality. It is recommended that you reinstall OpenRAG with your preferred configuration and installation method.

  • Learn more about OpenRAG: Explore OpenRAG and the OpenRAG documentation to learn more about its features and functionality.

  • Learn more about Langflow: For a deep dive on the Langflow API and visual editor, see the Langflow documentation.