Sequentially sends multiple independent user messages (each potentially with a system message) to a specified LLM provider and model. Handles logging of raw responses and offers experimental batch processing via the OpenAI Batch API.
Usage
single_turns(
user_msgs,
system_msgs = NULL,
org = c("google", "anthropic", "openai"),
model = NULL,
temperature = 0,
max_tokens = 1024L,
timeout = 60,
log_jsonl = TRUE,
jsonl_file = NULL,
batch = FALSE,
...
)
Arguments
- user_msgs
Character vector. A vector of user messages/prompts. Required.
- system_msgs
Character string, character vector, or NULL. System message(s); see Details for behavior. Default is NULL.
- org
Character vector. The LLM provider. Defaults to "google". Partial matching is supported. Allowed values: "google", "anthropic", "openai".
- model
Character string. The specific model ID. Defaults to NULL, which triggers provider-specific defaults (see single_turn).
- temperature
Numeric. Sampling temperature (>= 0). Default is 0.
- max_tokens
Integer. Maximum tokens per response. Default is 1024L.
- timeout
Numeric. Request timeout in seconds, applied per call in sequential mode or to the batch creation call in batch mode. Default is 60.
- log_jsonl
Logical. Should the full JSON response for each successful call be appended to a JSONL file? Default is TRUE.
- jsonl_file
Character string or NULL. Path to the JSONL file for logging. If NULL (default) and log_jsonl is TRUE, a filename is generated automatically (e.g., "llm_calls_YYYYMMDD_HHMMSS.jsonl"). Ignored if log_jsonl is FALSE or if batch = TRUE.
- batch
Logical. Use batch processing (currently OpenAI only)? Default is FALSE.
- ...
Additional arguments passed to single_turn (in sequential mode) or potentially used in batch preparation (currently unused).
Value
If batch = FALSE: a character vector of the same length as user_msgs, containing the extracted text responses. NA_character_ indicates that an error occurred for that specific prompt during the API call or processing.
If batch = TRUE and org = "openai": the OpenAI batch job ID (character string).
If batch = TRUE and org != "openai": stops with an error message.
Details
This function iterates through the provided user_msgs prompts. For each prompt, it determines the corresponding system message based on the system_msgs argument and calls the underlying single_turn function.
System Prompt Handling:
If system_msgs is NULL (default), no system message is used for any prompt.
If system_msgs is a single character string, that string is used as the system message for all user prompts.
If system_msgs is a character vector, it must be the same length as user_msgs; system_msgs[i] is paired with user_msgs[i] (see the sketch below).
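The pairing rules amount to something like the following sketch (illustrative only; resolve_system_msg is a hypothetical helper, not a function exported by this package):

# Hypothetical helper mirroring the rules above; not part of the package.
resolve_system_msg <- function(system_msgs, user_msgs, i) {
  if (is.null(system_msgs)) return(NULL)            # no system message at all
  if (length(system_msgs) == 1) return(system_msgs) # one shared system message
  stopifnot(length(system_msgs) == length(user_msgs))
  system_msgs[i]                                    # element-wise pairing
}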
Output and Logging:
By default (log_jsonl = TRUE), the full JSON response from the API for each successful call is appended to a JSONL file. If jsonl_file is not provided, a filename is generated automatically from the current timestamp. The path to the JSONL file is printed to the console when logging is active. The primary return value (when batch = FALSE) is a character vector containing the extracted text responses, with NA_character_ indicating failures for specific prompts.
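To inspect a log afterwards, each line can be parsed back into an R object; a minimal sketch, assuming the jsonlite package is available (it is not a stated dependency of this function):

# Hedged sketch: parse each line of the JSONL log as one JSON object.
read_jsonl_log <- function(path) {
  lapply(readLines(path), jsonlite::fromJSON)
}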
Batch Processing (Experimental - OpenAI Only):
If batch = TRUE and org = "openai", the function will:
1. Prepare a JSONL file suitable for the OpenAI Batch API.
2. Upload this file to OpenAI.
3. Create a batch processing job.
4. Print messages indicating the uploaded file ID and the created batch job ID.
5. Return the batch job ID as a character string.
Note: the actual results of the batch job are not retrieved by this function; you will need to check the job status and download the results separately using the batch ID (functionality that may be added to this package later). Batch processing for other providers is not yet implemented.
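For orientation, each line of the prepared batch file follows the request format documented for the OpenAI Batch API; a sketch of one such line (the field values here are illustrative):

# One Batch API request line, per OpenAI's documented JSONL format.
line <- list(
  custom_id = "request-1",
  method    = "POST",
  url       = "/v1/chat/completions",
  body      = list(
    model    = "gpt-4o-mini",
    messages = list(list(role = "user", content = "Translate to French: Hello"))
  )
)
cat(jsonlite::toJSON(line, auto_unbox = TRUE), "\n")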
Examples
if (FALSE) { # \dontrun{
# Ensure API keys are set
# Sys.setenv(GOOGLE_API_KEY = "YOUR_GOOGLE_KEY")
# Sys.setenv(OPENAI_API_KEY = "YOUR_OPENAI_KEY")
prompts <- c("What is R?", "Explain dplyr::mutate", "Why use version control?")
system_general <- "You are a helpful R programming assistant."
# --- Sequential Execution (Default) ---
# Using Google with default logging
responses_google <- single_turns(user_msgs = prompts, org = "google")
print(responses_google)
# Using OpenAI with a single system prompt and disabling logging
responses_openai <- single_turns(
user_msgs = prompts,
system_msgs = system_general,
org = "openai",
model = "gpt-4o-mini",
log_jsonl = FALSE
)
print(responses_openai)
# Using specific system prompts per user prompt
specific_system_msgs <- c("Explain like I'm 5", "Explain for data analyst", NA) # NA -> NULL system
responses_mixed_system_msgs <- single_turns(
user_msgs = prompts,
system_msgs = specific_system_msgs,
org = "openai"
)
print(responses_mixed_system_msgs)
# Specify a custom JSONL file location
my_log <- tempfile(fileext = ".jsonl")
responses_custom_log <- single_turns(
user_msgs = prompts[1:2],
org = "google",
jsonl_file = my_log
)
print(readLines(my_log))
unlink(my_log)
# --- Batch Execution (OpenAI Only Example) ---
# Note: This only *creates* the batch job. Results must be fetched later.
prompts_for_batch <- paste("Translate to French:", c("Hello", "Goodbye", "Thank you"))
batch_id <- single_turns(
user_msgs = prompts_for_batch,
org = "openai",
model = "gpt-4o-mini", # use a model supported by the Batch API
batch = TRUE
)
if (!is.null(batch_id)) {
print(paste("OpenAI Batch job created with ID:", batch_id))
# You would later use batch_id to check status and get results
}
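# Hedged sketch (not provided by this package): poll the batch job's status
# via OpenAI's documented GET /v1/batches/{id} endpoint, assuming httr2 is
# installed.
status <- httr2::request("https://api.openai.com/v1/batches") |>
  httr2::req_url_path_append(batch_id) |>
  httr2::req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  httr2::req_perform() |>
  httr2::resp_body_json()
print(status$status)  # e.g. "in_progress" or "completed"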
# Example of trying batch with non-OpenAI provider (will stop)
tryCatch({
single_turns(user_msgs = prompts, org = "google", batch = TRUE)
}, error = function(e) {
print(paste("Caught expected error:", e$message))
})
} # }