DeepSeek Component

DeepSeek is an AI developer of large language models (LLMs) focused on providing high-performance models.

Component key: deepseek

Description

Use the component to create chat completions with available models.

API Documentation:

The component was built using the DeepSeek API Documentation.

Connections

API Key

  1. Log in to DeepSeek and navigate to the API Keys section.
  2. Select Create New API Key and enter the generated key into the connection configuration of the integration.
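
Once created, the key is sent as a bearer token on every request. A minimal sketch of the headers an authenticated DeepSeek API call carries, assuming the standard `Authorization` scheme from the DeepSeek API documentation (the key below is a placeholder):

```python
# Build the headers an authenticated DeepSeek API call would carry.
API_KEY = "ASDB1234567890"  # placeholder; substitute your real key

def build_headers(api_key: str) -> dict:
    """Return the HTTP headers for an authenticated DeepSeek request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers(API_KEY)
```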
| Input | Default | Notes | Example |
| --- | --- | --- | --- |
| API Key (string, required, key: `apiKey`) | | DeepSeek API Key. | ASDB1234567890 |
| Base URL (string, required, hidden field, key: `baseUrl`) | https://api.deepseek.com | The base URL of the DeepSeek API. | |

Data Sources

Select Model

A picklist of available models from the DeepSeek API. | key: selectModel | type: picklist

| Input | Notes |
| --- | --- |
| Connection (connection, required, key: `connection`) | |

Actions

Create Chat Completion

Creates a model response for the given chat conversation. | key: createChatCompletion

| Input | Default | Notes | Example |
| --- | --- | --- | --- |
| Connection (connection, required, key: `connection`) | | | |
| Frequency Penalty (string, key: `frequence_penalty`) | | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | 2 |
| Include Usage (boolean, key: `include_usage`) | false | Only set this when `stream` is set: when true, an additional chunk is streamed before the `data: [DONE]` message. The `usage` field on this chunk shows the token usage statistics for the entire request, and the `choices` field is always an empty array. All other chunks also include a `usage` field, but with a null value. | |
| Should Return Log Probabilities (boolean, key: `log_probs`) | false | Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the `content` of `message`. | |
| Max Tokens (string, key: `max_tokens`) | | Integer between 1 and 8192. The maximum number of tokens that can be generated in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. If `max_tokens` is not specified, the default value 4096 is used. | 100 |
| Messages (code, required, key: `messages`) | | A list of messages comprising the conversation so far. | |
| Model (string, required, key: `model`) | | The model to use to generate the chat completion. | |
| Presence Penalty (string, key: `presence_penalty`) | | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | 2 |
| Response Format (string, key: `response_format`) | | The format of the response: either `json_object` or `text`. | json_object |
| Stop Sequence(s) (string, value list, key: `stop`) | | The stop sequence(s) to use when generating the chat completion. | stop1,stop2 |
| Stream (boolean, key: `stream`) | false | If set, partial message deltas are sent. Tokens are sent as data-only server-sent events (SSE) as they become available, with the stream terminated by a `data: [DONE]` message. | |
| Temperature (string, key: `temperature`) | | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or `top_p`, but not both. | 0.5 |
| Tool Choice (string, key: `tool_choice`) | | Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present; `auto` is the default if tools are present. | auto |
| Tools (code, key: `tools`) | | The tools to use when generating the chat completion. | |
| Top Log Probabilities (string, key: `top_logprobs`) | | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `log_probs` must be set to true if this parameter is used. | 10 |
| Top P (string, key: `top_p`) | | Number between 0 and 1. Higher values like 0.95 make the output more random, while lower values like 0.05 make it more focused and deterministic. We generally recommend altering this or `temperature`, but not both. | 0.5 |

Example output:

```json
{
  "data": {
    "id": "930c60df-bf64-41c9-a88e-3ec75f81e00e",
    "choices": [
      {
        "finish_reason": "stop",
        "index": 0,
        "message": {
          "content": "Hello! How can I help you today?",
          "role": "assistant"
        }
      }
    ],
    "created": 1705651092,
    "model": "deepseek-chat",
    "object": "chat.completion",
    "usage": {
      "completion_tokens": 10,
      "prompt_tokens": 16,
      "total_tokens": 26
    }
  }
}
```
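
The inputs above map onto the JSON body of DeepSeek's `/chat/completions` endpoint. A minimal sketch of the payload this action assembles, with field names taken from the table above and illustrative values (this is not the component's actual implementation):

```python
# Illustrative Create Chat Completion request body.
# Keys mirror the inputs in the table above; values are examples only.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.5,     # 0-2; lower is more deterministic
    "max_tokens": 100,      # integer between 1 and 8192
    "presence_penalty": 0,  # -2.0 to 2.0
    "stream": False,
}
```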

List Models

Retrieves the currently available models and provides basic information about each one, such as the owner and availability. | key: listModels

| Input | Notes |
| --- | --- |
| Connection (connection, required, key: `connection`) | |

Example output:

```json
{
  "data": {
    "object": "list",
    "data": [
      {
        "id": "deepseek-chat",
        "object": "model",
        "owned_by": "deepseek"
      },
      {
        "id": "deepseek-reasoner",
        "object": "model",
        "owned_by": "deepseek"
      }
    ]
  }
}
```
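
The Select Model data source presumably builds its picklist from this same response; as a sketch (an assumption about the implementation, not confirmed by the source), the model IDs can be extracted from the sample above like so:

```python
# Extract model IDs from a List Models response (sample from above).
response = {
    "object": "list",
    "data": [
        {"id": "deepseek-chat", "object": "model", "owned_by": "deepseek"},
        {"id": "deepseek-reasoner", "object": "model", "owned_by": "deepseek"},
    ],
}

model_ids = [model["id"] for model in response["data"]]
# model_ids == ["deepseek-chat", "deepseek-reasoner"]
```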

Raw Request

Send a Raw Request to the DeepSeek API. | key: rawRequest

| Input | Default | Notes | Example |
| --- | --- | --- | --- |
| Connection (connection, required, key: `connection`) | | | |
| Data (string, key: `data`) | | The HTTP body payload to send to the URL. | `{"exampleKey": "Example Data"}` |
| Debug Request (boolean, key: `debugRequest`) | false | Enabling this flag will log the current request. | |
| File Data (string, key-value list, key: `fileData`) | | File data to be sent as a multipart form upload. | `[{key: "example.txt", value: "My File Contents"}]` |
| File Data File Names (string, key-value list, key: `fileDataFileNames`) | | File names to apply to the file data inputs. Keys must match the file data keys above. | |
| Form Data (string, key-value list, key: `formData`) | | The form data to be sent as a multipart form upload. | `[{"key": "Example Key", "value": new Buffer("Hello World")}]` |
| Header (string, key-value list, key: `headers`) | | A list of headers to send with the request. | `User-Agent: curl/7.64.1` |
| Max Retry Count (string, key: `maxRetries`) | 0 | The maximum number of retries to attempt. Specify 0 for no retries. | |
| Method (string, required, key: `method`) | | The HTTP method to use. | |
| Query Parameter (string, key-value list, key: `queryParams`) | | A list of query parameters to send with the request. This is the portion at the end of the URL similar to `?key1=value1&key2=value2`. | |
| Response Type (string, required, key: `responseType`) | json | The type of data you expect in the response. You can request `json`, `text`, or `binary` data. | |
| Retry On All Errors (boolean, key: `retryAllErrors`) | false | If true, retries on all erroneous responses regardless of type. This is helpful when retrying after HTTP 429 or other 3xx or 4xx errors. Otherwise, only retries on HTTP 5xx and network errors. | |
| Retry Delay (ms) (string, key: `retryDelayMS`) | 0 | The delay in milliseconds between retries. This is used when 'Use Exponential Backoff' is disabled. | |
| Timeout (string, key: `timeout`) | | The maximum time that a client will wait for a response to its request. | 2000 |
| URL (string, required, key: `url`) | | The URL to call. | `/sobjects/Account` |
| Use Exponential Backoff (boolean, key: `useExponentialBackoff`) | false | Specifies whether to use a pre-defined exponential backoff strategy for retries. When enabled, 'Retry Delay (ms)' is ignored. | |
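
To illustrate how the retry inputs interact, here is a sketch assuming a conventional doubling backoff schedule with a 1000 ms base when 'Use Exponential Backoff' is enabled; the component's exact backoff parameters are not documented here:

```python
def next_retry_delay(attempt: int, fixed_delay_ms: int, use_exponential_backoff: bool) -> int:
    """Delay in milliseconds before retry number `attempt` (0-based).

    When exponential backoff is enabled, 'Retry Delay (ms)' is ignored and
    an assumed doubling schedule with a 1000 ms base is used instead.
    """
    if use_exponential_backoff:
        return 1000 * (2 ** attempt)  # 1 s, 2 s, 4 s, ...
    return fixed_delay_ms

delays = [next_retry_delay(i, 500, True) for i in range(3)]
# delays == [1000, 2000, 4000]
```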