Google Gemini Component
Google Gemini is a family of advanced multimodal AI models developed by Google DeepMind. Use the component to generate chats, images, and videos.
Component key: google-gemini
Description
Google Gemini is a family of advanced multimodal AI models developed by Google DeepMind.
Use the component to generate chats, images, and videos.
Connections
Google Gemini API
Navigate to Google AI Studio and generate an API key. Enter the key value into the connection configuration of the integration.
| Input | Notes | Example |
|---|---|---|
| API Key | Your Google AI Studio API key. Generate API keys in Google AI Studio. | AIza... |
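To confirm a key works before saving the connection configuration, you can call the Generative Language REST API directly. A minimal sketch in Python (the GEMINI_API_KEY environment variable name is an assumption; the component itself handles these calls for you):

```python
# Minimal sketch: verify a Google AI Studio API key by listing models.
import os

import requests

api_key = os.environ["GEMINI_API_KEY"]  # assumed variable name for your key

response = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": api_key},
    timeout=30,
)
response.raise_for_status()

for model in response.json().get("models", []):
    print(model["name"])  # e.g. "models/gemini-pro"
```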
Vertex AI API
To authenticate using Vertex AI:
- A Service Account is required. Create one in the Google Cloud Platform (GCP) Console from the IAM & Admin section.
- From the Service Account, use the Email value as the Client Email input value in the connection configuration.
- Add the following roles to the Service Account:
  - Vertex AI User or Vertex AI Administrator
  - Storage Object Viewer
- Once the Service Account is created, generate a Service Account Key:
  - Open the Service Account's options, navigate to the Keys tab, and select Add Key to create a new key.
  - After creating the key, you can download a JSON file containing the key information (see the example key file below the inputs table). This file contains sensitive data and should be handled with caution.
  - Use the private key from the downloaded file as the Private Key input value in the connection configuration.
- The top section of the console shows the current project. Select it to display all projects and their Project IDs.
- Supported regions are listed in the Vertex AI documentation, or by navigating to the Vertex AI Dashboard from the console.
- Enable the Vertex AI API by navigating to the Library section of APIs & Services, searching for 'Vertex', and selecting Enable for the Vertex AI API.
| Input | Notes | Example |
|---|---|---|
| Client Email | The service account's email address (the client_email value in the key file). | |
| Private Key | The service account's private key (the private_key value in the key file). | |
| Project ID | Your Google Cloud project ID. | my-project-123 |
| Region | The region to use for API requests. See the Vertex AI documentation for available regions. | us-central1 |
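For reference, a Google Cloud service account key file is a JSON document whose fields map onto the inputs above: client_email fills Client Email, private_key fills Private Key, and project_id fills Project ID. A sketch with placeholder values (truncated, not a real key):

```json
{
  "type": "service_account",
  "project_id": "my-project-123",
  "private_key_id": "<key id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n<key material>\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@my-project-123.iam.gserviceaccount.com",
  "client_id": "<client id>",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```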
Data Sources
Select Model
Select a model from the list of available models. | key: selectModel | type: picklist
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
{
"result": [
{
"label": "models/embedding-gecko-001",
"key": "models/embedding-gecko-001"
},
{
"label": "models/gemini-pro-vision",
"key": "models/gemini-pro-vision"
}
]
}
Actions
Conversation
Sends a message to the chat. Optionally, historical messages can be provided to continue the chat. | key: sendMessage
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Extra Parameters | Extra parameters to pass to the API. | key1:value1,key2:value2 |
| Chat History | JSON string containing the chat history. Use this parameter to give the model context from earlier turns of the conversation (see the example after this table). | |
| Max Output Tokens | Maximum number of tokens to generate in the response. | 1024 |
| Model Name | The name of the model to use for the request (e.g., 'gemini-pro', 'gemini-pro-vision'). | gemini-pro |
| Prompt | The prompt you want to ask to the model. | Write a short story about a robot learning to paint |
| Safety Settings | JSON string defining safety settings for content generation (see the example after this table). | |
| Temperature | Controls randomness in the output. Higher values (e.g., 0.8) make output more random, lower values (e.g., 0.2) make it more focused and deterministic. | 0.7 |
| Top K | Limits token selection to the K most likely next tokens. | 40 |
| Top P | Limits token selection to the smallest set of tokens whose cumulative probability meets or exceeds P (nucleus sampling). | 0.95 |
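Chat History and Safety Settings both expect JSON strings. The sketches below follow the shapes used by the Gemini API; the messages and thresholds shown are illustrative only. Chat History is a list of prior turns, each with a role ('user' or 'model') and parts:

```json
[
  { "role": "user", "parts": [{ "text": "What does IA stand for?" }] },
  { "role": "model", "parts": [{ "text": "It often stands for 'intelligence artificielle', the French term for AI." }] },
  { "role": "user", "parts": [{ "text": "Give two more expansions." }] }
]
```

Safety Settings is a list of harm-category/threshold pairs:

```json
[
  { "category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE" },
  { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH" }
]
```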
{
"data": {
"candidates": [
{
"content": {
"parts": [
{
"text": "Okay, let's break down what 'IA' could mean."
}
],
"role": "model"
},
"finishReason": "STOP",
"avgLogprobs": -0.28463075392904913
}
],
"modelVersion": "gemini-2.0-flash",
"usageMetadata": {
"promptTokenCount": 8,
"candidatesTokenCount": 1285,
"totalTokenCount": 1293,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 8
}
],
"candidatesTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 1285
}
]
}
}
}
Delete File
Deletes a file from the service. | key: deleteFile
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| File Name | The name of the file to delete. | test.txt |
{
"data": "Deleted successfully"
}
Generate Image
Generates an image using the Google Generative AI (Gemini) model. | key: generateImage
| Input | Notes | Example |
|---|---|---|
| Aspect Ratio | Aspect ratio of the generated media. | 16:9 |
| Connection | Select a Google Gemini connection. | |
| Extra Parameters | Extra parameters to pass to the API. | key1:value1,key2:value2 |
| Language | Language of the generated content. | en |
| Model Name | The name of the model to use for image generation. | gemini-pro |
| Number of Images | Number of images to generate. | 1 |
| Prompt | Text prompt that typically describes the images to output. | A robot learning to paint in a sunlit studio |
Generate Video
Generates a video using the Google Generative AI (Gemini) model. | key: generateVideo
| Input | Notes | Example |
|---|---|---|
| Aspect Ratio | Aspect ratio of the generated media. | 16:9 |
| Connection | Select a Google Gemini connection. | |
| Duration Seconds | Duration of the clip for video generation in seconds. | 10 |
| Extra Parameters | Extra parameters to pass to the API. | key1:value1,key2:value2 |
| FPS | FPS of the generated video. | 30 |
| Model Name | The name of the model to use for video generation. | gemini-pro |
| Number of Videos | Number of videos to generate. | 1 |
| Person Generation | Whether to allow generation of videos containing people, and whether to restrict generation to specific age groups. | dont_allow |
| Prompt | Text prompt that typically describes the video to output. | A robot learning to paint, slow camera pan across the studio |
| Resolution | Resolution of the generated video. | 1080p |
Get File
Retrieves the file information from the service. | key: getFile
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| File Name | The name of the file to get. | test.txt |
{
"data": {
"name": "files/ramp",
"displayName": "Ramp.png",
"mimeType": "image/png",
"sizeBytes": "3343",
"createTime": "2025-05-21T15:28:28.841883Z",
"expirationTime": "2025-05-23T15:28:28.807436986Z",
"updateTime": "2025-05-21T15:28:28.841883Z",
"sha256Hash": "Y2FiZDdjMDIyYTlmYjNkNDU2OGM3YmYwMmNmY2Q4ODliNDE5YWI2NzBjOTM4NDk5MmNkNzhkM2EzM2ZjNzM2Mw==",
"uri": "https://generativelanguage.googleapis.com/v1beta/files/ramp",
"state": "ACTIVE",
"source": "UPLOADED"
}
}
Get Model Info
Retrieves detailed information about a specific model from the Google Generative AI API. | key: getModelInfo
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Model Name | The name of the model to get information about (e.g., 'gemini-pro', 'gemini-pro-vision'). | gemini-pro |
{
"data": {
"name": "models/gemini-2.0-flash",
"displayName": "Gemini 2.0 Flash",
"description": "Gemini 2.0 Flash",
"version": "2.0",
"tunedModelInfo": {},
"inputTokenLimit": 1048576,
"outputTokenLimit": 8192,
"supportedActions": [
"generateContent",
"countTokens",
"createCachedContent",
"batchGenerateContent"
]
}
}
List Files
Lists all current project files from the service. | key: listFiles
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Fetch All | If true, fetch all items. | false |
| Page Size | The number of items to return per page. | 10 |
| Page Token | The token for the page of results to return, taken from a previous response (see the pagination sketch after this table). | 10 |
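When Fetch All is false, results come back one page at a time and each response includes a token for the next page. A hedged Python sketch of the underlying pagination loop, assuming the public v1beta files endpoint and its standard nextPageToken field (Fetch All presumably does the equivalent for you):

```python
# Hedged sketch: page through files the way Fetch All would.
import os

import requests

api_key = os.environ["GEMINI_API_KEY"]  # assumed variable name
url = "https://generativelanguage.googleapis.com/v1beta/files"

files, page_token = [], None
while True:
    params = {"key": api_key, "pageSize": 10}
    if page_token:
        params["pageToken"] = page_token
    page = requests.get(url, params=params, timeout=30).json()
    files.extend(page.get("files", []))
    page_token = page.get("nextPageToken")
    if not page_token:  # no more pages
        break

print(f"Fetched {len(files)} files")
```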
{
"data": [
{
"name": "files/ramp",
"displayName": "Ramp.png",
"mimeType": "image/png",
"sizeBytes": "3343",
"createTime": "2025-05-21T15:28:28.841883Z",
"expirationTime": "2025-05-23T15:28:28.807436986Z",
"updateTime": "2025-05-21T15:28:28.841883Z",
"sha256Hash": "Y2FiZDdjMDIyYTlmYjNkNDU2OGM3YmYwMmNmY2Q4ODliNDE5YWI2NzBjOTM4NDk5MmNkNzhkM2EzM2ZjNzM2Mw==",
"uri": "https://generativelanguage.googleapis.com/v1beta/files/ramp",
"state": "ACTIVE",
"source": "UPLOADED"
},
{
"name": "files/test",
"mimeType": "binary/octet-stream",
"sizeBytes": "65338",
"createTime": "2025-05-21T02:09:27.231980Z",
"expirationTime": "2025-05-23T02:09:27.177531840Z",
"updateTime": "2025-05-21T02:09:27.231980Z",
"sha256Hash": "NDMzZjUxYTAxMTNiY2QyYzZjMGE2OGRkYzEwMmJhMzk0MGMxZmI3NGZjY2ExMjQwOWVlNTVjOWZjODY3ODZlYg==",
"uri": "https://generativelanguage.googleapis.com/v1beta/files/test",
"state": "ACTIVE",
"source": "UPLOADED"
}
]
}
List Models
Retrieves a list of available models from the Google Generative AI API. | key: listModels
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Extra Parameters | Extra parameters to pass to the API. | key1:value1,key2:value2 |
| Fetch All | If true, fetch all items. | false |
| Filter | The filter to apply to the list. | name:gemini-1.5-pro |
| Page Size | The number of items to return per page. | 10 |
| Page Token | The token for the page of results to return, taken from a previous response. | 10 |
{
"data": [
{
"name": "models/gemini-2.0-flash",
"displayName": "Gemini 2.0 Flash",
"description": "Gemini 2.0 Flash",
"version": "2.0",
"tunedModelInfo": {},
"inputTokenLimit": 1048576,
"outputTokenLimit": 8192,
"supportedActions": [
"generateContent",
"countTokens",
"createCachedContent",
"batchGenerateContent"
]
}
]
}
Send Prompt
Sends a prompt to the model and provides a response. | key: generateText
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Extra Parameters | Extra parameters to pass to the API. | key1:value1,key2:value2 |
| Max Output Tokens | Maximum number of tokens to generate in the response. | 1024 |
| Model Name | The name of the model to use for the request (e.g., 'gemini-pro', 'gemini-pro-vision'). | gemini-pro |
| Prompt | The text prompt to generate a response for. | Write a short story about a robot learning to paint |
| Safety Settings | JSON string defining safety settings for content generation. | |
| Temperature | Controls randomness in the output. Higher values (e.g., 0.8) make output more random, lower values (e.g., 0.2) make it more focused and deterministic. | 0.7 |
| Top K | Limits token selection to the K most likely next tokens. | 40 |
| Top P | Limits token selection to the smallest set of tokens whose cumulative probability meets or exceeds P (nucleus sampling). | 0.95 |
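Under the hood these inputs map onto the API's generateContent call, where the prompt goes into contents and the tuning inputs go into generationConfig. A hedged Python sketch against the public REST endpoint (the model name and environment variable are assumptions); the example payload below shows the shape of a typical response:

```python
# Hedged sketch: a generateContent request equivalent to this action's inputs.
import os

import requests

api_key = os.environ["GEMINI_API_KEY"]  # assumed variable name
model = "gemini-2.0-flash"  # any text-capable Gemini model

body = {
    "contents": [{"parts": [{"text": "Write a short story about a robot learning to paint"}]}],
    "generationConfig": {
        "temperature": 0.7,
        "topK": 40,
        "topP": 0.95,
        "maxOutputTokens": 1024,
    },
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE"}
    ],
}

resp = requests.post(
    f"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent",
    params={"key": api_key},
    json=body,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```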
{
"data": {
"candidates": [
{
"content": {
"parts": [
{
"text": "The lighthouse keeper, Silas, was a man of routine."
}
],
"role": "model"
},
"finishReason": "STOP",
"avgLogprobs": -0.5446674455915178
}
],
"modelVersion": "gemini-2.0-flash",
"usageMetadata": {
"promptTokenCount": 6,
"candidatesTokenCount": 525,
"totalTokenCount": 531,
"promptTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 6
}
],
"candidatesTokensDetails": [
{
"modality": "TEXT",
"tokenCount": 525
}
]
}
}
}
Upload File
Uploads a file asynchronously to the Gemini API. | key: uploadFile
| Input | Notes | Example |
|---|---|---|
| Connection | Select a Google Gemini connection. | |
| Display Name | The display name of the file. | test.txt |
| File | The file to upload. | test.txt |
| File Name | The name of the file to upload. | test.txt |
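To reproduce this call outside the component, a sketch assuming the google-generativeai Python package is installed; the response resembles the example payload below:

```python
# Hedged sketch: upload a file with the google-generativeai package
# (assumed installed via `pip install google-generativeai`).
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # assumed variable name

# Upload a local file and give it a friendly display name.
uploaded = genai.upload_file(path="test.txt", display_name="test.txt")
print(uploaded.name, uploaded.uri)  # resource name and download URI
```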
{
"data": {
"name": "files/ramp",
"displayName": "Ramp.png",
"mimeType": "image/png",
"sizeBytes": "3343",
"createTime": "2025-05-21T15:28:28.841883Z",
"expirationTime": "2025-05-23T15:28:28.807436986Z",
"updateTime": "2025-05-21T15:28:28.841883Z",
"sha256Hash": "Y2FiZDdjMDIyYTlmYjNkNDU2OGM3YmYwMmNmY2Q4ODliNDE5YWI2NzBjOTM4NDk5MmNkNzhkM2EzM2ZjNzM2Mw==",
"uri": "https://generativelanguage.googleapis.com/v1beta/files/ramp",
"state": "ACTIVE",
"source": "UPLOADED"
}
}
Changelog
2025-06-04
Added Vertex AI credential handling for improved authentication reliability.
2025-05-23
Initial release of the Google Gemini component for text generation and analysis.