
# chatLLM

chatLLM is an R package providing a single, consistent interface to multiple “OpenAI‑compatible” chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, Gemini, Grok, and GitHub Models).
Key features:

- A single `call_llm()` interface across all supported providers
- Plain prompts or full multi-turn `messages` conversations
- Tunable generation parameters (`max_tokens`, `top_p`, `presence_penalty`, `frequency_penalty`)
- Informational messages that can be silenced (`verbose = TRUE/FALSE`)
- Model discovery via `list_models()`
- Automatic retries via `n_tries` / `backoff`, plus a pluggable `.post_func`

## Installation

From CRAN:
```r
install.packages("chatLLM")
```

Development version:
```r
# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")
```

## Setup

Set your API keys or tokens once per session:

```r
Sys.setenv(
  OPENAI_API_KEY     = "your-openai-key",
  GROQ_API_KEY       = "your-groq-key",
  ANTHROPIC_API_KEY  = "your-anthropic-key",
  DEEPSEEK_API_KEY   = "your-deepseek-key",
  DASHSCOPE_API_KEY  = "your-dashscope-key",
  GH_MODELS_TOKEN    = "your-github-models-token",
  GEMINI_API_KEY     = "your-gemini-key",
  XAI_API_KEY        = "your-grok-key"
)
```
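
To confirm a key is visible to the current session, a quick base-R sanity check (not a package helper):

```r
# TRUE when the variable is set to a non-empty value
nzchar(Sys.getenv("OPENAI_API_KEY"))
```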

## Usage

Ask a one-off question with a plain prompt:

```r
response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)
```
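
Because the interface is identical across providers, switching backends only means changing `provider`. A small sketch, assuming you have set keys for every provider in the loop:

```r
providers <- c("openai", "groq", "anthropic")
answers   <- lapply(providers, function(p) {
  call_llm(
    prompt     = "Who is Messi?",
    provider   = p,
    max_tokens = 300
  )
})
names(answers) <- providers
```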

For a multi-turn conversation, pass a list of `messages`:

```r
conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)
response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)
```
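
To keep the dialogue going, append the assistant's reply and your follow-up to the same list, then call `call_llm()` again. A sketch, assuming the provider accepts an `assistant` role in the history, as OpenAI-style chat APIs do:

```r
conv <- c(conv, list(
  list(role = "assistant", content = response),               # model's last answer
  list(role = "user",      content = "Show a short example.") # follow-up question
))
response <- call_llm(messages = conv, provider = "openai", max_tokens = 200)
cat(response)
```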

Suppress informational messages:

```r
res <- call_llm(
  prompt      = "Tell me a joke",
  provider    = "openai",
  verbose     = FALSE
)
cat(res)
```

Create a reusable LLM function:

```r
# Build a “GitHub Models” engine with defaults baked in
GitHubLLM <- call_llm(
  provider    = "github",
  max_tokens  = 60,
  verbose     = FALSE
)
# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)
```
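
Because the returned engine is an ordinary R function, it can be mapped over several prompts. A sketch, assuming it returns a single character string as the examples above suggest:

```r
prompts <- c("Define recursion in one sentence.",
             "Define memoization in one sentence.")
answers <- vapply(prompts, GitHubLLM, character(1))
```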

## Listing Models

```r
# All providers at once
all_models <- list_models("all")
names(all_models)
# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)
```

Pick from the list and pass it to `call_llm()`:

```r
anthro_models <- list_models("anthropic")
cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
```

## Troubleshooting

- If you hit rate limits or transient errors, increase `n_tries` / `backoff`, or supply a custom `.post_func` with a higher `timeout()`.
- To see which model names a provider accepts, run `list_models("<provider>")` or consult the provider docs.
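
For illustration, a retry-tuned call might look like the sketch below. `n_tries`, `backoff`, and `.post_func` come from the notes above, but the exact `.post_func` signature is an assumption (a thin wrapper around `httr::POST`), so check the package documentation before relying on it:

```r
library(httr)

# More patience for flaky networks: up to 5 attempts with growing backoff
res <- call_llm(
  prompt   = "Summarize the plot of Hamlet.",
  provider = "openai",
  n_tries  = 5,
  backoff  = 2
)

# Hypothetical custom poster with a 120-second timeout
# (signature assumed, not taken from the package docs)
slow_post <- function(url, ...) {
  httr::POST(url, ..., httr::timeout(120))
}

res <- call_llm(
  prompt     = "Summarize the plot of Hamlet.",
  provider   = "openai",
  .post_func = slow_post
)
```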
docs.Issues and PRs welcome at https://github.com/knowusuboaky/chatLLM

## License

MIT © Kwadwo Daddy Nyame Owusu - Boakye

## Acknowledgements

Inspired by RAGFlowChainR, powered by httr and the R community. Enjoy!