This function takes a ggplot2 object and generates a storytelling narrative focused on humanitarian insights. It uses the {ellmer} package to call a language model (large or small) from a supported provider.
Usage
generate_plot_story(
  plot,
  max_tokens = 30,
  provider = NULL,
  model = NULL,
  clean_response = TRUE
)

Arguments
- plot
  A ggplot object from ggplot2.
- max_tokens
  Maximum number of tokens (approximate) for the narrative (default = 30).
- provider
  Optional character string specifying the provider. Options include: "openai", "gemini", "anthropic", "ollama", "azure". If NULL, auto-detect from environment keys.
- model
  Optional character string specifying the model name. For Azure, this is typically the deployment name. If NULL, a default model for the chosen provider will be used.
- clean_response
  Logical. Whether to clean the response by removing thinking tags and other artifacts (default = TRUE).
Details
If you do not have API keys or need to work offline, install Ollama and browse its reasoning models at https://ollama.com/search?c=thinking
Setup:
Install {ellmer}:
install.packages("ellmer")
Set your API key in your environment. For Azure OpenAI, use the standard OpenAI key variable:
Sys.setenv(OPENAI_API_KEY = "<YOUR_AZURE_OPENAI_KEY>")
When using Azure, set provider = "azure" and provide the following environment variables:
Sys.setenv(AZURE_OPENAI_ENDPOINT = "<YOUR_AZURE_ENDPOINT>")
Sys.setenv(AZURE_OPENAI_API_VERSION = "<YOUR_AZURE_OPENAI_API_VERSION>")
The best place to set these is in .Renviron, which you can easily edit by calling usethis::edit_r_environ()
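For reference, the same keys can be stored persistently in .Renviron rather than set per session; a minimal sketch (all values are placeholders to replace with your own credentials):

# contents of ~/.Renviron (placeholder values)
OPENAI_API_KEY=<YOUR_AZURE_OPENAI_KEY>
AZURE_OPENAI_ENDPOINT=<YOUR_AZURE_ENDPOINT>
AZURE_OPENAI_API_VERSION=<YOUR_AZURE_OPENAI_API_VERSION>

Restart R (or call readRenviron("~/.Renviron")) for the values to take effect.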
Examples
library(ggplot2)
p <- ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  unhcrthemes::theme_unhcr(grid = "Y", axis = "X", axis_title = FALSE) +
  labs(
    title = "Vehicle Efficiency",
    subtitle = "Fuel consumption vs weight",
    caption = "Source: mtcars dataset"
  )
generate_plot_story(p, provider = "ollama", model = "deepseek-r1")
#> [1] "Key takeaway: Lighter vehicles consume less fuel. From the data: A car weighing only 1.5 tons achieves 30 mpg efficiency, over ten points higher than heavier models around 4 tons. This pattern shows that even small weight reductions yield big efficiency wins, a powerful message for funders investing to bolster program performance."
story <- generate_plot_story(p, provider = "azure", model = "gpt-4.1-mini", max_tokens = 300)
# To use the generated story as a plot subtitle:
p + ggplot2::labs(subtitle = story)