R interface to Large Language Models with an OpenAI-compatible API



llmR: R Interface to Large Language Models

llmR is an R package that provides a seamless interface to various Large Language Models (LLMs) such as OpenAI’s GPT models, Azure’s language models, Google’s Gemini models, or custom local servers. It simplifies the process of sending prompts to these models and handling their responses, including rate limiting, retries, and error processing.

Features

  • Unified API: Set up and easily switch between different LLM providers and models using a consistent set of functions.
  • Prompt Processing: Convert chat messages into a standard format suitable for LLMs.
  • Output Processing: Request JSON output from the LLMs; the package attempts to sanitize the response if parsing fails.
  • Error Handling: Automatically handle errors and retry requests when rate limits are exceeded. If a response is cut due to token limits, the package will ask the LLM to complete the response.
  • Custom Providers: Query custom endpoints (local and online) or implement your own ad-hoc LLM connection functions.
  • Mock Calls: Simulate LLM interactions for testing purposes.
  • Logging: Option to log the LLM response details for performance and cost monitoring.

Installation

You can install the development version of llmR from GitHub with:

# install.packages("remotes")
remotes::install_github("bakaburg1/llmR")

Usage

Direct Model Call

You can directly call an LLM by providing all necessary connection and authentication details within the prompt_llm function using the model_specification argument. This is useful for quick tests or when you don’t need to reuse model configurations.

Here’s a simple example of how to send a prompt to an LLM provider by passing a full model specification:

library(llmR)

# Send a prompt by providing the full model specification
response <- prompt_llm(
  messages = "Hello there!",
  model_specification = list(
    provider = "openai", # Or "azure", "gemini", "custom"
    model = "gpt-4o",    # Specific model name
    api_key = "your_openai_api_key", # Your API key
    # Add other necessary parameters like 'endpoint' or 'api_version' if needed
    # e.g., for Azure:
    # endpoint = "https://your-resource-name.openai.azure.com",
    # api_version = "2024-06-01",
    # Or for a custom OpenAI-compatible server:
    # endpoint = "http://localhost:11434/v1/chat/completions"
    parameters = list(temperature = 0.7) # Optional parameters for the call
  )
)

cat(response) # cat() prints the response with formatting preserved

Storing and Reusing Model Configurations

For more convenient and organized usage, especially when working with multiple models or providers, llmR allows you to store model configurations and refer to them using a simple label.

Recording Model Configurations

You can use the record_llmr_model() function to store credentials and settings for various models. This is typically done once, for example in your user .Rprofile, so that the configurations are available across sessions.

These are examples of how to set up LLMs from various providers:

# In your .Rprofile file. You can edit it by using usethis::edit_r_profile()

# Set up OpenAI language model
llmR::record_llmr_model(
  # A label to identify the model specification to use with set_llmr_model()
  label = "openai",
  # The provider name which corresponds to different API structures
  provider = "openai",
  # Optional model name for endpoints serving multiple models, such as the OpenAI one
  model = "gpt-4o",
  # Default parameters to be sent with the API request
  parameters = list(temperature = 0.5),
  # The API key to use for the provider
  api_key = "your_openai_api_key")

# Set up Azure language model
llmR::record_llmr_model(
    label = "gpt-4_azure",
    provider = "azure",
    # Some model providers require the endpoint to be specified
    endpoint = "https://your-resource-name.openai.azure.com",
    model = "your-deployment-id",
    api_key = "your_azure_api_key",
    # The api version is a parameter required for Azure hosted models
    api_version = "2024-06-01")

# Set up Google Gemini language model
llmR::record_llmr_model(
    label = "gemini",
    provider = "gemini",
    model = "gemini-1.5-pro",
    api_key = "your_gemini_api_key")

# Anthropic Claude model via OpenRouter as a custom provider
llmR::record_llmr_model(
    label = "openrouter",
    provider = "custom",
    endpoint = "https://openrouter.ai/api/v1/chat/completions",
    model = "anthropic/claude-3-opus-20240229",
    api_key = "your_openrouter_api_key")

# Mistral Large model via Azure
llmR::record_llmr_model(
    label = "Mistral-large-2407",
    provider = "custom",
    endpoint = "https://Mistral-large-2407-my_azure_resource.server_location.models.ai.azure.com/v1/chat/completions",
    api_key = "your_model_api_key")

# Example of a local model served by Ollama
llmR::record_llmr_model(
  label = "ollama-llama3.1",
  provider = "custom", # For Ollama, the provider is 'custom'
  endpoint = "http://localhost:11434/v1/chat/completions",
  model = "llama3.1" # Ensure this matches the model name in Ollama
  # No api_key is needed for local Ollama by default
)

# Choose your default model
llmR::set_llmr_model("openai")

Once models are recorded, you can use their labels directly in prompt_llm via the model_specification argument:

# Send a prompt using a recorded model's label
response <- prompt_llm(
  messages = "Tell me a joke.",
  model_specification = "openai" # Assuming 'openai' label was recorded
)

cat(response) # cat() prints the response with formatting preserved

Setting a Default Model for the Session

To avoid specifying the model in every prompt_llm call (for example, when using llmR in a script), you can set a default model for the current R session with set_llmr_model(label). The label must correspond to a model previously stored with record_llmr_model().

# Choose your default model for the session
llmR::set_llmr_model("openai") # Sets 'openai' as the default

# Now, prompt_llm will use the 'openai' configuration by default
response_default <- prompt_llm(
  messages = "Hello again!"
)

cat(response_default)

# You can still override the session default by providing model_specification
response_override <- prompt_llm(
  messages = "Explain quantum physics simply.",
  model_specification = "gemini" # Uses 'gemini' for this call only
)

cat(response_override)

Prompting the LLMs

The prompt_llm function accepts prompts in several formats, illustrated in the sketch after this list:

  1. Single message: A single string message from the user.
  2. Named vector: A named vector where each element represents a message with a specified role.
  3. List of messages: A list where each element is a named vector with role and content.
  4. Batch prompting: A list of lists of messages for parallel processing (not fully tested for all providers).
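
A sketch of the first three formats (the "system" role shown here follows the standard OpenAI chat convention):

# 1. Single message: a plain string from the user
response <- prompt_llm(messages = "Hello there!")

# 2. Named vector: names indicate the role of each message
response <- prompt_llm(messages = c(
  system = "You are a helpful assistant.",
  user = "Hello there!"
))

# 3. List of messages: each element is a named vector with role and content
response <- prompt_llm(messages = list(
  c(role = "system", content = "You are a helpful assistant."),
  c(role = "user", content = "Hello there!")
))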

Logging

You can enable logging to track the time taken for each request and the number of tokens sent and generated by setting the llmr_log_requests option to TRUE.
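
For example:

# Log timing and token counts for all subsequent prompt_llm() calls
options(llmr_log_requests = TRUE)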

Prompt History

The llmR package provides functionality to store and manage the history of prompts and responses for later review. This is achieved through a session management system. Key functions include:

  • set_session_id(): Sets or generates a unique session ID.
  • store_llm_session_data(): Automatically called by prompt_llm() to store interaction data.
  • get_session_id(): Retrieves the current session ID.
  • get_session_data(): Retrieves stored data for a specific session or all sessions.
  • get_session_data_ids(): Returns a list of all stored session IDs.
  • remove_session_data(): Removes data for a specific session or all sessions.

Here’s an example of how to set a session ID, store interaction data, and retrieve it:

# Set a unique session ID
session_id <- set_session_id()

# Send a prompt and store the interaction data
response <- prompt_llm(messages = c(user = "Hello there!"))

# Retrieve the stored session data
session_data <- get_session_data(session_id)
str(session_data) # Inspect the stored interaction data
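
You can also list and clean up stored sessions (a sketch based on the functions listed above):

# List all stored session IDs
get_session_data_ids()

# Remove the data for this session
remove_session_data(session_id)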

Such tools can be useful for more advanced uses, such as building chatbots.

By default, prompt_llm() automatically stores session data. You can review and analyze your interaction history using these functions, which is useful for debugging, improving prompts, or auditing your LLM usage.

Custom Providers

The package implements interfaces for “OpenAI”, “Azure OpenAI GPT”, and “Google Gemini” models, but you can also use other custom providers compatible with the “OpenAI” API specification, or even write your own LLM functions for providers with different API structures.

To create a custom provider function, name it with the pattern: use_<custom_provider>_llm. For example, use_myProvider_llm. The function should accept a body argument and return an httr response object.
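
Here is a minimal sketch of such a function for a hypothetical OpenAI-compatible provider; the endpoint URL, header, and environment variable are illustrative assumptions, not part of llmR:

# Hypothetical custom provider: POSTs the request body as JSON and
# returns the raw httr response, as llmR expects
use_myProvider_llm <- function(body) {
  httr::POST(
    url = "https://api.myprovider.example.com/v1/chat/completions", # illustrative
    httr::add_headers(
      Authorization = paste("Bearer", Sys.getenv("MYPROVIDER_API_KEY"))
    ),
    body = body,
    encode = "json"
  )
}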

Mock Calls

The package provides a way to simulate LLM responses for testing purposes. You can pass “mock” as the model label to prompt_llm(), or set the llmr_llm_provider option to “mock”. The llmr_mock_response option can be set to a custom response that will be used by the mock functions.
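
For example, using the options-based approach:

# Route calls to the mock provider and define the canned response
options(
  llmr_llm_provider = "mock",
  llmr_mock_response = "This is a mock response."
)

response <- prompt_llm(messages = "Hello there!")
cat(response) # Prints the mock response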

Citation

If you use llmR in your research, please cite it as follows:

D'Ambrosio, A. (2024). llmR: R Interface to Large Language Models. URL: https://github.com/bakaburg1/llmR