DeepInfra Querying Function

With the DeepInfra Querying function, you can easily query the DeepInfra API to use OpenAI-compatible models such as meta-llama/Meta-Llama-3-8B-Instruct and optimize costs for your Voiceflow builds.
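In practice, "OpenAI compatible" means the model can be called through DeepInfra's OpenAI-style chat completions endpoint. The sketch below is illustrative only: the base URL reflects DeepInfra's documented compatibility layer at the time of writing, and the function name and parameters are assumptions, not the FlowBridge function's actual code.

```typescript
// Minimal sketch, assuming DeepInfra's OpenAI-compatible base URL
// (https://api.deepinfra.com/v1/openai). The FlowBridge function's real
// implementation may differ; this only shows the request shape.
async function queryDeepInfra(apiKey: string, userPrompt: string): Promise<string> {
  const res = await fetch("https://api.deepinfra.com/v1/openai/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "meta-llama/Meta-Llama-3-8B-Instruct",
      messages: [{ role: "user", content: userPrompt }],
      max_tokens: 512,
    }),
  });
  const data = await res.json();
  // OpenAI-compatible responses return the generated text in choices[0].message.content.
  return data.choices[0].message.content;
}
```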

Guide:

Input Variables

License - Your FlowBridge Custom Function license key

Model - The OpenAI-compatible DeepInfra model you want to use

Api Key - The API key for your DeepInfra account

System Prompt - The system prompt you want to include (You can set this to "manual" and change the prompt in the code)

User Prompt - The user prompt you want to include, generally your {last_utterance} variable (You can set this to "manual" and change the prompt in the code)

Max Tokens - The maximum number of tokens you want the model to generate
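For orientation, here is a hedged sketch of how these inputs plausibly map onto an OpenAI-style request body. The interface and helper names below are hypothetical, not taken from the FlowBridge function, and the License key is FlowBridge-specific, so it is not part of the DeepInfra request.

```typescript
// Illustration only: how the documented inputs could map onto an
// OpenAI-style chat completion payload. DeepInfraInputs and
// buildRequestBody are made-up names for this sketch.
interface DeepInfraInputs {
  model: string;        // e.g. "meta-llama/Meta-Llama-3-8B-Instruct"
  apiKey: string;       // sent as a Bearer token header, not in the body
  systemPrompt: string; // or hard-coded in the function when set to "manual"
  userPrompt: string;   // typically the {last_utterance} variable
  maxTokens: number;
}

function buildRequestBody(input: DeepInfraInputs) {
  return {
    model: input.model,
    messages: [
      { role: "system", content: input.systemPrompt },
      { role: "user", content: input.userPrompt },
    ],
    max_tokens: input.maxTokens,
  };
}
```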

Step-by-step guide

The official support page shows these steps with screenshots: https://support.flowbridge.app/hc/en-us/articles/19033382413714-DeepInfra-Querying-Use-cheaper-open-source-models

  1. Navigate to the FlowBridge admin panel

  2. Click "Custom Functions"

  3. Click "Download" on the DeepInfra Querying function

  4. Go to your Voiceflow project

  5. Click "Functions"

  6. Click the import icon to import the function code

  7. Click "Browse"

  8. Select the downloaded JSON file and click "Import"

  9. Click "Workflows"

  10. Double-click the workflow you want to open

  11. Click "Capture" and drag it into the project

  12. Connect the Start block to the Capture block

  13. Click "Function" and drag it into the project

  14. Connect the Capture block to the Function block

  15. Click on the Function block and click "Select existing function"

  16. Click "DeepInfra Querying (OpenAI Standard)"

  17. Go to the FlowBridge custom function page and copy your license key

  18. Paste the license key into the License field

  19. Click the Max Tokens field

  20. Enter the maximum number of tokens you want the LLM to use

  21. Click the Model field

  22. In a new tab, navigate to https://deepinfra.com/

  23. Click "Models"

  24. Click the copy icon next to the LLM you want to use (make sure it is an OpenAI-compatible model)

  25. Paste the model name into the Model field

  26. Go back to DeepInfra; if you do not have an account yet, create one. If you do, click "Dashboard"

  27. Click "API Keys"

  28. Click "New API key"

  29. Click the "API Key name" field

  30. Type a label for this API key

  31. Click "Generate API key"

  32. Click the copy icon to copy the API key

  33. Paste the API key into the Api Key field

  34. Click the System Prompt field

  35. Type your system prompt, for example "Be a pirate"

  36. Click the User Prompt field

  37. Type "Say hello to me" or use the {last_utterance} variable to pass along the user's message

  38. Click the field where the function's response should be stored

  39. Type "{"

  40. Click "Create variable"

  41. Click the "Name" field

  42. Type "deepinfra_response"

  43. Click "Create variable"

  44. Click the Function block's output port and drag a line onto the canvas

  45. Click "Text"

  46. Click the Text block's message field

  47. Type "{deepinfra_response}" to show the LLM's response to the user
