
Send a Prompt Sequentially to Multiple LLM Perspectives (Text Output)
Source: R/polyphony.R
Executes a single user prompt sequentially against multiple Large Language Model
(LLM) configurations defined in the perspectives list. It leverages the
core single_turn function for individual API interactions, collecting only
the text responses. Provides verbose output on progress.
Usage
polyphony(
  user,
  perspectives,
  system = NULL,
  max_tokens = 1024L,
  timeout = 60,
  verbose = TRUE,
  error_strategy = c("return_partial", "stop")
)
Arguments
- user
Character string. The user prompt to send to all perspectives. Required.
- perspectives
A list where each element is itself a list defining an LLM configuration to query. Each inner list must contain:
  - id: A unique character string identifier for this perspective (used for naming the output).
  - org: Character string specifying the LLM provider (e.g., "google", "openai", "anthropic"). Passed to single_turn.
  - model: Character string specifying the model name for the provider. Passed to single_turn.
Inner lists can optionally contain other arguments accepted by single_turn, such as temperature, max_tokens, timeout, etc. These will override the top-level defaults (max_tokens, timeout) if provided. Note: any output_format or jsonl_file arguments within a perspective will be ignored, as polyphony forces text output.
- system
Optional character string. A system prompt applied identically to all perspectives. Defaults to NULL.
- max_tokens
Integer. The default maximum number of tokens to generate, used if a perspective does not specify its own max_tokens. Defaults to 1024L.
- timeout
Numeric. The default request timeout in seconds, used if a perspective does not specify its own timeout. Defaults to 60.
- verbose
Logical. If TRUE (default), prints status messages indicating which perspective is currently being processed.
- error_strategy
Character string defining behavior when one or more perspectives encounter an error during the API call. Must be one of:
  - "return_partial" (default): Returns a list containing text results for successful perspectives and error objects for failed ones.
  - "stop": If any perspective fails, the entire function stops and throws an error, summarizing which perspectives failed.
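How per-perspective settings take precedence over the top-level max_tokens and timeout defaults can be sketched with a small merge helper. This is a hypothetical illustration (merge_perspective_args is an invented name; the package's actual internals may differ):

```r
# Hypothetical sketch of argument merging: per-perspective values override
# the top-level defaults, and text output is forced by dropping the
# output_format / jsonl_file fields polyphony ignores.
merge_perspective_args <- function(p, defaults = list(max_tokens = 1024L, timeout = 60)) {
  args <- utils::modifyList(defaults, p)  # p's fields win over defaults
  args$output_format <- NULL
  args$jsonl_file <- NULL
  args
}

p <- list(id = "gemini_flash", org = "google",
          model = "gemini-1.5-flash-latest", max_tokens = 500)
merged <- merge_perspective_args(p)
# merged$max_tokens is the perspective's 500; merged$timeout falls back to 60
```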
Value
A named list. The names are the ids from the perspectives input list.
The values are:
- If the call for that perspective was successful: the character string containing the LLM's text response (obtained via single_turn(..., output_format = "text")).
- If the call failed and error_strategy = "return_partial": the error object captured during the failed single_turn call.
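Given this return shape, separating successes from failures is straightforward. A sketch with made-up results, where simpleError stands in for an error captured from a failed single_turn call:

```r
# Hypothetical results list under error_strategy = "return_partial":
# one success (a character string) and one captured failure (an error object).
results <- list(
  gpt4o_mini    = "Polyphony is a musical texture with independent voices.",
  invalid_model = simpleError("model not found")
)

failed <- vapply(results, inherits, logical(1), "error")
texts  <- results[!failed]  # keep only the successful text responses
```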
Details
This function acts as a multiplexer, sending the same query sequentially to
different models/providers and collecting their textual responses. It relies
on the underlying single_turn function for handling the specifics of each
provider's API, authentication, and response parsing, ensuring that
output_format is always set to "text".
Authentication relies on API keys being available as environment variables,
as handled by single_turn (e.g., GOOGLE_API_KEY, OPENAI_API_KEY,
ANTHROPIC_API_KEY).
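The sequential multiplexing described above can be sketched as follows. Here mock_single_turn stands in for the real single_turn so the example runs offline, and the loop shape is an assumption about polyphony's behavior, not its actual source code:

```r
# Stand-in for single_turn: returns a text response or errors out.
mock_single_turn <- function(user, org, model, ...) {
  if (model == "bad-model") stop("unknown model: ", model)
  paste0("[", org, "/", model, "] response to: ", user)
}

# Assumed shape of the sequential loop: one tryCatch-wrapped call per
# perspective, results named by each perspective's id.
run_perspectives <- function(user, perspectives, error_strategy = "return_partial") {
  results <- lapply(perspectives, function(p) {
    tryCatch(
      do.call(mock_single_turn, c(list(user = user), p[setdiff(names(p), "id")])),
      error = function(e) e  # capture the error object instead of aborting
    )
  })
  names(results) <- vapply(perspectives, `[[`, character(1), "id")
  failed <- vapply(results, inherits, logical(1), "error")
  if (error_strategy == "stop" && any(failed)) {
    stop("Perspectives failed: ", paste(names(results)[failed], collapse = ", "))
  }
  results
}

res <- run_perspectives("Hello", list(
  list(id = "ok",  org = "openai", model = "gpt-4o-mini"),
  list(id = "bad", org = "openai", model = "bad-model")
))
# res$ok is a character string; res$bad is the captured error object
```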
Examples
if (FALSE) { # \dontrun{
# Make sure the single_turn function is available (e.g., via devtools::load_all())
# Ensure API keys are set as environment variables:
# Sys.setenv(GOOGLE_API_KEY = "YOUR_GOOGLE_KEY")
# Sys.setenv(ANTHROPIC_API_KEY = "YOUR_ANTHROPIC_KEY")
# Sys.setenv(OPENAI_API_KEY = "YOUR_OPENAI_KEY")
# Define perspectives
perspectives_list <- list(
  list(id = "gpt4o_mini", org = "openai", model = "gpt-4o-mini", temperature = 0.5),
  list(id = "claude3h", org = "anthro", model = "claude-3-haiku-20240307"), # Partial org match
  list(id = "gemini_flash", org = "google", model = "gemini-1.5-flash-latest", max_tokens = 500)
)
# --- Sequential Execution with Verbose Output (Default) ---
results_seq_verbose <- polyphony(
  user = "Explain the concept of 'polyphony' in music.",
  perspectives = perspectives_list,
  system = "You are a helpful assistant."
)
print(results_seq_verbose)
# --- Sequential Execution without Verbose Output ---
results_seq_quiet <- polyphony(
  user = "Explain the concept of 'polyphony' in music.",
  perspectives = perspectives_list,
  system = "You are a helpful assistant.",
  verbose = FALSE
)
print(results_seq_quiet)
# --- Example with error handling ---
perspectives_with_error <- list(
  list(id = "gpt4o_mini_ok", org = "openai", model = "gpt-4o-mini"),
  list(id = "invalid_model", org = "openai", model = "non-existent-model-123")
)
# Stop on error
tryCatch({
  polyphony(
    user = "Hello",
    perspectives = perspectives_with_error,
    error_strategy = "stop"
  )
}, error = function(e) {
  message("Caught expected error: ", conditionMessage(e))
})
# Return partial results (verbose output will show processing for both)
results_partial <- polyphony(
  user = "Hello",
  perspectives = perspectives_with_error,
  error_strategy = "return_partial"
)
print(results_partial)
# Check which ones failed
print(sapply(results_partial, inherits, "error"))
} # }