ChatGPT support
The Botsquad platform has built-in support for ChatGPT and other large language models.
Prompt example
By creating a script of the GPT prompt type, it is possible to define one or more prompts that can be used at runtime in the bot.
At a minimum, a prompt file looks like this:
prompts:
  - id: rhyme
    text: |
      make a sentence that rhymes with: {{ text }}
In Bubblescript this exposes a constant called @prompts.rhyme, which can then be used like this:
dialog main do
  ask "Enter a sentence and I will make it rhyme for you"
  _result = GPT.complete(@prompts.rhyme, text: answer.text)
  say _result.text
end
resulting in a conversation like this:
bot: Enter a sentence and I will make it rhyme for you
user: I want to fly away!
bot: Today is Sunday Funday, let's go play!
Full prompt YAML
The full specification of a prompt YAML file is as follows:
prompts:
  - # exposed in Bubblescript as @prompts.[id], so @prompts.summarize in this case:
    id: summarize
    # English label, used when the prompt is included in the CMS or Inbox widget
    label: Summarize
    # LLM provider, currently 'openai' is the only supported one.
    provider: openai
    # the LLM model to use. The `/v1/chat/completions` OpenAI endpoint is
    # used to execute the prompt. For supported models see:
    # https://platform.openai.com/docs/models/model-endpoint-compatibility
    model: gpt-3.5-turbo
    # A list of arbitrary string scopes. The strings 'cms_widget', 'cms_full'
    # and 'inbox' are special: when they are specified in the prompt scopes, the
    # prompt shows up in the studio markdown editors in the CMS and / or the Inbox.
    scopes: [cms_widget]
    # the actual text of the prompt. Can be a simple string or a $i18n structure, so
    # that the prompt is translated for the conversation's locale. The prompt text is
    # actually a Liquid template, so `{{ }}` bindings can be specified which then
    # need to be passed in when calling `GPT.complete()`.
    text:
      $i18n: true
      nl: |
        system: Gegeven de volgende tekst, maak een korte en bondige samenvatting die
          alleen de meest noodzakelijke punten teruggeeft. Gebruik hooguit 50
          woorden:
        user: {{text}}
      en: |
        system: Given the following text, create a short summary that only highlights the
          most relevant parts of the text. Use at most 50 words:
        user: {{text}}
    # additional request parameters passed to the OpenAI /v1/chat/completions endpoint.
    # See https://platform.openai.com/docs/api-reference/chat/create for possible values
    endpoint_params:
      temperature: 1.2
Executing prompts
Calling the GPT.complete(prompt, bindings) function performs a call to the GPT API with the given prompt and its bindings. The prompt argument typically comes from a constant defined in a prompt YAML file, for instance @prompts.summarize.
The bindings argument is a map or keyword list that needs to contain the bindings that the prompt expects; in the summarize example there is only one binding, named text. So a call to that prompt would be done like this:
_result = GPT.complete(@prompts.summarize, text: "this is a long article ...")
The full result of the GPT.complete call is a map which contains the following:
text - The output text that GPT produced
json - A JSON-deserialized version of the text; the runtime detects whether JSON is present in the result and, if so, parses it. The JSON message itself can be padded with arbitrary other text.
usage - The total number of tokens that were used for this API call
request_time - The number of milliseconds this request took
raw - The raw OpenAI response
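For instance, a dialog could call the summarize prompt defined above and work with these result fields. The following is a minimal sketch; the dialog name and wording are illustrative:
dialog summarize do
  ask "Paste the text you would like me to summarize"
  # fill the {{ text }} binding of the summarize prompt with the user's answer
  _result = GPT.complete(@prompts.summarize, text: answer.text)
  # besides .text, the result also exposes .json, .usage, .request_time and .raw
  say _result.text
end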
User / bot / assistant roles
The prompt text can contain user:, assistant: or system: strings, which are used to determine the different parts of the prompt (e.g. for constructing the messages part of the OpenAI request payload).
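For example, a prompt that uses a system: part to fix the assistant's behaviour and a user: part for the input could look like this sketch (the translate id and the wording are illustrative):
prompts:
  - id: translate
    text: |
      system: You are a translator. Translate whatever the user writes into French
        and reply with the translation only.
      user: {{ text }}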
Automatic bindings
Some prompt bindings are filled in automatically. In the case of Bubblescript GPT.complete calls, the following bindings are provided automatically:
locale - The conversation's locale
transcript - The last 5 turns of the bot / user. This is typically used to make a generic chatbot that responds to the previous conversation in a natural way.
bot - The metadata of the bot; for instance {{ bot.title }} is exposed.
conversation - The metadata of the conversation.
The transcript binding is an array binding and needs to be specified as [[ transcript ]], i.e. with square brackets, and on a line by itself!
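Putting this together, a generic prompt that answers based on the conversation so far might look like the following sketch (the chitchat id and the wording are illustrative):
prompts:
  - id: chitchat
    text: |
      system: You are {{ bot.title }}, a friendly assistant. Answer in the language
        of the locale '{{ locale }}', taking the conversation so far into account.
      [[ transcript ]]
      user: {{ text }}
Such a prompt could then be executed with GPT.complete(@prompts.chitchat, text: answer.text); the locale, bot and transcript bindings are filled in automatically.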
Charging
For every GPT.complete call, a charge event (of type gpt.complete) is created, which is taken into account in the customer's billing cycle.