Google Cloud BigQuery Component

BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time.

Component key: google-cloud-bigquery

Description

Google Cloud BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time.

The Push Notifications service lets you receive notifications that an order has been created. This is called "push" since Google pushes notifications to you about events, such as orders, that happen on the Google side.

  • The Content API's pubsubnotificationsettings.update receives the request and sends you back a cloudTopicName.

  • To configure additional Topics

    • In the Google Cloud console, open the navigation menu and scroll to the Pub/Sub page (Navigation Menu > More Products > Analytics > Pub/Sub).
    • In the Topics page, click Create Topic
      • In the window that opens, enter MyTopic in the Topic ID field.
      • Leave the default values for the remaining options, and then click Create.
      • You see the success message: A new topic and a new subscription have been successfully created.
      • You have just created a topic called MyTopic and an associated default subscription MyTopic-sub.
  • You create a subscription for the topic and register the URL push endpoint with Cloud Pub/Sub.

  • To configure a subscription, go to Pub/Sub > Subscriptions.

    • In the Subscriptions page, click Create subscription.

    • Enter MySub in the Subscription ID field.

    • For Select a Cloud Pub/Sub topic, select the MyTopic topic from the drop-down menu.

    • Leave the default values for the remaining options.

    • Click Create.

      • You see the success message: Subscription successfully added.
    • Go to the Topics page and click MyTopic.

      • The MySub subscription is now attached to the topic MyTopic. Pub/Sub delivers all messages sent to MyTopic to the MySub and MyTopic-sub subscriptions.

  • Cloud Pub/Sub accepts your subscription and associates that cloudTopicName with your URL. When messages are published to that cloudTopicName (for example, your order notifications), they will be sent to your URL push endpoint.

Request

PUT https://shoppingcontent.googleapis.com/content/v2.1/merchantId/pubsubnotificationsettings
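
The exact request body depends on the Content API reference; a minimal TypeScript sketch (using Node 18's built-in fetch) might look like the following. The merchant ID, access token, and the registeredEvents value orderPendingShipment are assumptions for illustration, not values from this document.

// Hedged sketch: register for order push notifications via the Content API.
// MERCHANT_ID and ACCESS_TOKEN are hypothetical placeholders.
const MERCHANT_ID = "1234567";
const ACCESS_TOKEN = process.env.ACCESS_TOKEN;

async function updatePushSettings(): Promise<void> {
  const response = await fetch(
    `https://shoppingcontent.googleapis.com/content/v2.1/${MERCHANT_ID}/pubsubnotificationsettings`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      // "orderPendingShipment" is an assumed event name; confirm against the Content API docs.
      body: JSON.stringify({ registeredEvents: ["orderPendingShipment"] }),
    }
  );
  const settings = await response.json();
  console.log(settings.cloudTopicName); // Google responds with the topic to subscribe to
}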

Connections

Google Cloud BigQuery Private Key

Input | Default | Notes | Example
Client Email
string
/ Required
clientEmail
The email address of the client you would like to connect.
someone@example.com
Private Key
text
/ Required
privateKey
The private key of the client you would like to connect.
 
Scopes
string
/ Required
Hidden Field
scopes
https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-platform.read-only
The OAuth 2.0 scopes to request, as a space-delimited list.
https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-platform.read-only
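
These two values typically come from a GCP service account key file: its client_email and private_key fields map to the Client Email and Private Key inputs. A minimal TypeScript sketch of that mapping, assuming a hypothetical file name:

// Hedged sketch: pull the connection values out of a downloaded key file.
// "service-account.json" is a placeholder path.
import { readFileSync } from "node:fs";

const key = JSON.parse(readFileSync("service-account.json", "utf8"));
const clientEmail: string = key.client_email; // maps to the Client Email input
const privateKey: string = key.private_key;   // maps to the Private Key input; keep the BEGIN/END PRIVATE KEY lines intact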

OAuth2

The Google BigQuery component authenticates requests through the Google Cloud Platform (GCP) OAuth 2.0 service. You'll need to create a GCP OAuth 2.0 app so your integration can authenticate and perform Google BigQuery tasks on your customers' behalf.

To create a Google BigQuery OAuth 2.0 app, first make sure you have a Google Developer account - you can sign up at https://console.cloud.google.com/. Then:

  1. Open up the Google API Console
  2. Click CREATE PROJECT if you would like to create a new GCP project, or select an existing project.
  3. You will be prompted to enable Google BigQuery for your project. Click ENABLE.
  4. On the sidebar, select Credentials.
  5. An OAuth 2.0 app includes a "Consent Screen" (the page that asks "Do you want to allow (Your Company) to access Google BigQuery on your behalf?"). Click CONFIGURE CONSENT SCREEN.
    1. Your app will be externally available to your customers, so choose a User Type of External.
    2. Fill out the OAuth consent screen with an app name (your company or product's name), support email, app logo, domain, etc.
    3. You can ignore domains for now.
    4. On the next page, add these scopes to your app (these may not all be necessary, and should match the scopes you request in your connection definition in Prismatic):
      • https://www.googleapis.com/auth/bigquery
      • https://www.googleapis.com/auth/bigquery.insertdata
      • https://www.googleapis.com/auth/cloud-platform
      • https://www.googleapis.com/auth/cloud-platform.read-only
      • https://www.googleapis.com/auth/devstorage.full_control
      • https://www.googleapis.com/auth/devstorage.read_only
      • https://www.googleapis.com/auth/devstorage.read_write
    5. Enter some test users for your testing purposes. Your app will only work for those testing users until it is "verified" by Google. When you are ready for verification (they verify your privacy policy statement, etc.), click PUBLISH APP on the OAuth consent screen. That'll allow your customers to authorize your integration to access their Google BigQuery data.
  6. Once your "Consent Screen" is configured, open the Credentials page from the sidebar again.
  7. Click +CREATE CREDENTIALS and select OAuth client ID.
    1. Under Application type select Web application.
    2. Under Authorized redirect URIs enter Prismatic's OAuth 2.0 callback URL: https://oauth2.prismatic.io/callback
    3. Click CREATE.
  8. Take note of the Client ID and Client Secret that are generated.

INFO Make sure to publish your OAuth 2.0 app after you've tested it so users outside of your test users can authorize your integration to interact with Google BigQuery on their behalf.

Now that you have a Client ID and Client Secret, add a Google BigQuery step to your integration in Prismatic. Open the Configuration Wizard by clicking Configuration Wizard, select your Google BigQuery connection, and enter your client ID and secret. You will probably want to keep the default Google BigQuery scopes:

  • https://www.googleapis.com/auth/bigquery: View and manage your data in Google BigQuery and see the email address for your Google Account
  • https://www.googleapis.com/auth/bigquery.insertdata: Insert data into Google BigQuery
  • https://www.googleapis.com/auth/cloud-platform: See, edit, configure, and delete your Google Cloud data and see the email address for your Google Account
  • https://www.googleapis.com/auth/cloud-platform.read-only: View your data across Google Cloud services and see the email address of your Google Account
  • https://www.googleapis.com/auth/devstorage.full_control: Manage your data and permissions in Cloud Storage and see the email address for your Google Account
  • https://www.googleapis.com/auth/devstorage.read_only: View your data in Google Cloud Storage
  • https://www.googleapis.com/auth/devstorage.read_write: Manage your data in Cloud Storage and see the email address of your Google Account
Input | Default | Notes
Authorize URL
string
/ Required
Hidden Field
authorizeUrl
https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent
The Authorization URL for Google BigQuery.
Client ID
string
/ Required
clientId
The Google BigQuery app's Client Identifier.
Client Secret
password
/ Required
clientSecret
The Google BigQuery app's Client Secret.
Scopes
string
/ Required
scopes
https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigquery.insertdata https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/cloud-platform.read-only https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/devstorage.read_only https://www.googleapis.com/auth/devstorage.read_write
Space delimited listing of scopes. https://developers.google.com/identity/protocols/oauth2/scopes#bigquery
Token URL
string
/ Required
Hidden Field
tokenUrl
https://oauth2.googleapis.com/token
The Token URL for Google BigQuery.

Triggers

PubSub Notification

PubSub Notification Trigger Settings | key: myTrigger


Data Sources

Fetch Projects Names

Fetch an array of project names | key: projectsNames | type: picklist

Input | Notes
Connection
connection
/ Required
connection
 

{
  "result": [
    {
      "label": "John Locke",
      "key": "650"
    },
    {
      "label": "John Doe",
      "key": "47012"
    }
  ]
}

Fetch Tables Names

Fetch an array of table names | key: tablesNames | type: picklist

Input | Notes
Connection
connection
/ Required
connection
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Project ID
string
/ Required
projectId
Project ID of the datasets to be listed

{
  "result": [
    {
      "label": "John Locke",
      "key": "650"
    },
    {
      "label": "John Doe",
      "key": "47012"
    }
  ]
}

Actions

Cancel Job

Requests that a job be cancelled. | key: cancelJob

Input | Notes
Connection
connection
/ Required
connectionInput
 
Job ID
string
/ Required
jobId
Job ID of the requested job.
Location
string
location
The geographic location of the job. See https://cloud.google.com/bigquery/docs/locations for supported locations.
Project ID
string
/ Required
projectId
Project ID of the requested job.

Create Dataset

Creates a new empty dataset. | key: createDataset

Input | Default | Notes | Example
Access
code
access
Optional. An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities: access.specialGroup: projectReaders; access.role: READER; access.specialGroup: projectWriters; access.role: WRITER; access.specialGroup: projectOwners; access.role: OWNER; access.userByEmail: [dataset creator email]; access.role: OWNER.
Connection
connection
/ Required
connectionInput
 
 
 
Creation Time
string
creationTime
Output only. The time when this dataset was created, in milliseconds since the epoch.
 
Dataset Reference
code
/ Required
datasetReference
A reference that identifies the dataset.
Default Collation
string
defaultCollation
Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified.
 
Default Encryption Configuration
code
defaultEncryptionConfiguration
The default encryption key for all tables in the dataset. Once this property is set, all newly-created partitioned tables in the dataset will have encryption key set to this value, unless table creation request (or query) overrides the key.
Default Partition Expiration (ms)
string
defaultPartitionExpirationMs
The default partition expiration for all partitioned tables in the dataset, expressed in milliseconds. When new time-partitioned tables are created in a dataset where this property is set, the table will inherit this value, propagated as the TimePartitioning.expirationMs property on the new table. If you set TimePartitioning.expirationMs explicitly when creating a table, the defaultPartitionExpirationMs of the containing dataset is ignored. When creating a partitioned table, if defaultPartitionExpirationMs is set, the defaultTableExpirationMs value is ignored and the table will not inherit a table expiration deadline.
 
Default Rounding Mode
string
defaultRoundingMode
Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified.
 
Default Table Expiration (ms)
string
defaultTableExpirationMs
Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration with a PATCH request, set to 0. Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property.
 
Description
string
description
Optional. A user-friendly description of the dataset.
 
ETag
string
etag
Output only. A hash of the resource.
 
Friendly Name
string
friendlyName
Optional. A descriptive name for the dataset.
 
ID
string
id
Output only. The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field.
 
Is Case Insensitive
boolean
isCaseInsensitive
false
Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE, which means the dataset and its table names are case-sensitive. This field does not affect routine references.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information.
Last Modified Time
string
lastModifiedTime
Output only. The date when this dataset was last modified, in milliseconds since the epoch.
 
Location
string
location
The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations.
 
Max Time Travel Hours
string
maxTimeTravelHours
Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set.
 
Project ID
string
/ Required
projectId
Project ID of the new dataset.
 
Satisfies PZS
boolean
satisfiesPzs
false
Output only. Reserved for future use.
 
Self Link
string
selfLink
Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource.
 
Storage Billing Model
string
storageBillingModel
Optional. Updates storageBillingModel for the dataset.
 
Tags
code
tags
Output only. Tags for the Dataset.
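
For orientation, this action maps to the datasets.insert REST method, where only Dataset Reference is required. A minimal sketch of the inputs, with hypothetical project and dataset IDs:

// Hedged sketch of a minimal Create Dataset payload.
const createDatasetBody = {
  datasetReference: {
    projectId: "my-project", // hypothetical project ID
    datasetId: "my_dataset", // hypothetical dataset ID
  },
  location: "US",            // optional
  description: "Example dataset created from an integration", // optional
};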

Create Job

Starts a new asynchronous job. | key: createJob

Input | Notes | Example
Configuration
code
/ Required
configuration
Required. Describes the job configuration.
Connection
connection
/ Required
connectionInput
 
 
ETag
string
etag
Output only. A hash of the resource.
 
ID
string
id
Output only. Opaque ID field of the job.
 
Job Reference
code
jobReference
Optional. Reference describing the unique-per-user name of the job.
Kind
string
kind
Output only. The resource type.
 
Project ID
string
/ Required
projectId
Project ID of the project that will be billed for the job.
 
Self Link
string
selfLink
Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource.
 
Statistics
code
statistics
Output only. Information about the job, including starting time and ending time of the job.
Status
code
status
Output only. The status of this job. Examine this value when polling an asynchronous job to see if the job is complete.
User Email
string
userEmail
Output only. Email address of the user who ran the job.
 

Create Routine

Creates a new routine in the dataset. | key: createRoutine

Input | Default | Notes | Example
Arguments
code
argument
Input/output argument of a function or a stored procedure.
Connection
connection
/ Required
connectionInput
 
 
 
Creation Time
string
creationTime
Output only. The time when this routine was created, in milliseconds since the epoch.
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
 
Definition Body
string
/ Required
definitionBody
Required. The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, ' ', y)) The definitionBody is concat(x, ' ', y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' The definitionBody is return "\n";\n Note that both \n are replaced with linebreaks.
 
Description
string
description
Optional. The description of the routine, if defined.
 
Determinism Level
string
determinismLevel
Optional. The determinism level of the JavaScript UDF, if defined. One of DETERMINISM_LEVEL_UNSPECIFIED / DETERMINISTIC / NOT_DETERMINISTIC
 
ETag
string
etag
Output only. A hash of the resource.
 
Imported Libraries
string
Value List
importedLibraries
000xxx
Optional. If language = 'JAVASCRIPT', this field stores the path of the imported JAVASCRIPT libraries.
 
Language
string
language
Optional. Defaults to 'SQL' if remoteFunctionOptions field is absent, not set otherwise. One of LANGUAGE_UNSPECIFIED / SQL / JAVASCRIPT / PYTHON / JAVA / SCALA
 
Last Modified Time
string
lastModifiedTime
Output only. The time when this routine was last modified, in milliseconds since the epoch.
 
Project ID
string
/ Required
projectId
Project ID of the new routine.
 
Remote Function Options
code
remoteFunctionOptions
Optional. Remote function specific options.
Return Table Type
code
returnTableType
Optional. Can be set only if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return table type is inferred from definitionBody at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
Return Type
code
returnType
Optional if language = 'SQL'; required otherwise. Cannot be set if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return type is inferred from definitionBody at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time.
Routine Reference
code
/ Required
routineReference
Reference describing the ID of this routine.
Routine Type
string
/ Required
routineType
The type of routine. One of ROUTINE_TYPE_UNSPECIFIED / SCALAR_FUNCTION / PROCEDURE / TABLE_VALUED_FUNCTION
 
Spark Options
code
sparkOptions
Optional. Spark specific options.
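
Putting the required inputs together, a minimal sketch of a SQL scalar function (the routines.insert shape), with hypothetical IDs:

// Hedged sketch of a minimal Create Routine payload.
const createRoutineBody = {
  routineReference: {
    projectId: "my-project", // hypothetical
    datasetId: "my_dataset", // hypothetical
    routineId: "double_it",  // hypothetical
  },
  routineType: "SCALAR_FUNCTION",
  language: "SQL",
  arguments: [{ name: "x", dataType: { typeKind: "INT64" } }],
  definitionBody: "x * 2", // the expression inside the AS clause
};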

Create Table

Creates a new, empty table in the dataset. | key: createTable

Input | Default | Notes | Example
Clustering
code
clustering
Clustering specification for the table. Must be specified with time-based partitioning, data in the table will be first partitioned and subsequently clustered.
Connection
connection
/ Required
connectionInput
 
 
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the new table.
 
Default Collation
string
defaultCollation
Optional. Defines the default collation specification of new STRING fields in the table.
 
Default Rounding Mode
string
defaultRoundingMode
Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified.
 
Description
string
description
Optional. A user-friendly description of this table.
 
Encryption Configuration
code
encryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model.
Expiration Time
string
expirationTime
Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
 
External Data Configuration
code
externalDataConfiguration
Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
Friendly Name
string
friendlyName
Optional. A descriptive name for this table.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
The labels associated with this table. You can use these to organize and group your tables. You can set this property when inserting or updating a table.
Materialized View
code
materializedView
Optional. The materialized view definition.
Max Staleness
string
maxStaleness
Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type.
 
Project ID
string
/ Required
projectId
Project ID of the new table.
 
Range Partitioning
code
rangePartitioning
If specified, configures range partitioning for this table.
Require Partition Filter
boolean
requirePartitionFilter
false
Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
 
Schema
code
schema
Optional. Describes the schema of this table.
Table Reference
code
/ Required
tableReference
Reference describing the ID of this table.
Time Partitioning
code
timePartitioning
If specified, configures time-based partitioning for this table.
View
code
view
Optional. The view definition.
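
A minimal sketch of the required Table Reference plus an optional Schema (the tables.insert shape), with hypothetical IDs:

// Hedged sketch of a minimal Create Table payload.
const createTableBody = {
  tableReference: {
    projectId: "my-project", // hypothetical
    datasetId: "my_dataset", // hypothetical
    tableId: "orders",       // hypothetical
  },
  schema: {
    fields: [
      { name: "id", type: "INTEGER", mode: "REQUIRED" },
      { name: "customer", type: "STRING", mode: "NULLABLE" },
      { name: "created_at", type: "TIMESTAMP", mode: "NULLABLE" },
    ],
  },
};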

Delete Dataset

Deletes the dataset specified by the datasetId value. Before you can delete a dataset, you must delete all its tables, either manually or by specifying deleteContents. Immediately after deletion, you can create another dataset with the same name. | key: deleteDataset

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Project ID
string
/ Required
projectId
Project ID of the dataset being deleted.

Delete Job

Requests the deletion of the metadata of a job. | key: deleteJob

Input | Notes
Connection
connection
/ Required
connectionInput
 
Job ID
string
/ Required
jobId
Job ID of the requested job.
Location
string
location
The geographic location of the job. See https://cloud.google.com/bigquery/docs/locations for supported locations.
Project ID
string
/ Required
projectId
Project ID of the job for which metadata is to be deleted.

Delete Model

Deletes the model specified by model ID from the dataset. | key: deleteModel

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Model ID
string
/ Required
modelId
Model ID of the requested model.
Project ID
string
/ Required
projectId
Project ID of the model to delete.

Delete Routine

Deletes the routine specified by routine ID from the dataset. | key: deleteRoutine

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Project ID
string
/ Required
projectId
Project ID of the routine to delete.
Routine ID
string
/ Required
routineId
Routine ID of the requested routine.

Delete Table

Deletes the table specified by table ID from the dataset. | key: deleteTable

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the table to delete.
Project ID
string
/ Required
projectId
Project ID of the table to delete.
Table ID
string
/ Required
tableId
Table ID of the table to delete.

Get Dataset

Returns the dataset specified by datasetID. | key: getDataset

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Project ID
string
/ Required
projectId
Project ID of the requested dataset.

Get Job

Returns information about a specific job. | key: getJob

Input | Notes
Connection
connection
/ Required
connectionInput
 
Job ID
string
/ Required
jobId
Job ID of the requested job.
Location
string
location
The geographic location of the job. See https://cloud.google.com/bigquery/docs/locations for supported locations.
Project ID
string
/ Required
projectId
Project ID of the requested job.

Get Model

Gets the specified model resource by model ID. | key: getModel

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Model ID
string
/ Required
modelId
Model ID of the requested model.
Project ID
string
/ Required
projectId
Project ID of the requested model.

Get Policy

Gets the access control policy for a resource. | key: getPolicy

Input | Notes | Example
Connection
connection
/ Required
connectionInput
 
 
Options
code
options
OPTIONAL: A GetPolicyOptions object for specifying options to tables.getIamPolicy.
Table ID
string
/ Required
resource
The resource for which the policy is being requested. See [Resource names](https://cloud.google.com/apis/design/resource_names) for the appropriate value for this field.
 

Get Query Job Results

Receives the results of a query job. | key: getQueryJobResult

Input | Notes
Connection
connection
/ Required
connectionInput
 
Job ID
string
/ Required
jobId
Job ID of the requested job.
Location
string
location
The geographic location of the job. See https://cloud.google.com/bigquery/docs/locations for supported locations.
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the query job.
Start Index
string
startIndex
Zero-based index of the starting row.
Timeout (ms)
string
timeoutMs
Optional. Specifies the maximum amount of time, in milliseconds, that the client is willing to wait for the query to complete. By default, this limit is 10 seconds (10,000 milliseconds). If the query is complete, the jobComplete field in the response is true. If the query has not yet completed, jobComplete is false. You can request a longer timeout period in the timeoutMs field. However, the call is not guaranteed to wait for the specified timeout; it typically returns after around 200 seconds (200,000 milliseconds), even if the query is not complete. If jobComplete is false, you can continue to wait for the query to complete by calling the getQueryResults method until the jobComplete field in the getQueryResults response is true.
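
Because jobComplete can come back false, callers usually poll until the job finishes. A hedged TypeScript sketch of that loop against the underlying jobs.getQueryResults endpoint, assuming an OAuth bearer token and hypothetical project and job IDs:

// Hedged sketch: poll getQueryResults until jobComplete is true.
const PROJECT_ID = "my-project"; // hypothetical
const JOB_ID = "job_abc123";     // hypothetical
const ACCESS_TOKEN = process.env.ACCESS_TOKEN;

async function waitForQueryResults(): Promise<unknown[]> {
  const url =
    `https://bigquery.googleapis.com/bigquery/v2/projects/${PROJECT_ID}` +
    `/queries/${JOB_ID}?timeoutMs=10000`;
  for (;;) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
    });
    const body = await res.json();
    if (body.jobComplete) {
      return body.rows ?? []; // use pageToken to fetch further pages
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // brief pause before polling again
  }
}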

Get Routine

Gets the specified routine resource by routine ID. | key: getRoutine

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Project ID
string
/ Required
projectId
Project ID of the requested routine.
Read Mask
string
readMask
If set, only the Routine fields in the field mask are returned in the response. If unset, all Routine fields are returned. This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'.
Routine ID
string
/ Required
routineId
Routine ID of the requested routine.

Get Service Account

Receives the service account for a project, used for interactions with Google Cloud KMS. | key: getServiceAccount

Input | Notes
Connection
connection
/ Required
connectionInput
 
Project ID
string
/ Required
projectId
Project ID for which the service account is requested.

Get Table

Gets the specified table resource by table ID. | key: getTable

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested table.
Project ID
string
/ Required
projectId
Project ID of the requested table.
Selected Fields
string
selectedFields
List of table schema fields to return (comma-separated). If unspecified, all fields are returned. A fieldMask cannot be used here because the fields will automatically be converted from camelCase to snake_case and the conversion will fail if there are underscores. Since these are fields in BigQuery table schemas, underscores are allowed.
Table ID
string
/ Required
tableId
Table ID of the requested table.
View
string
view
Optional. Specifies the view that determines which table information is returned. By default, basic table information and storage statistics (STORAGE_STATS) are returned. One of TABLE_METADATA_VIEW_UNSPECIFIED / BASIC / STORAGE_STATS / FULL

List Datasets

Lists all datasets in the specified project to which the user has been granted the READER dataset role. | key: listDatasets

Input | Default | Notes
All
boolean
all
false
Whether to list all datasets, including hidden ones
Connection
connection
/ Required
connectionInput
 
 
Filter
string
filter
An expression for filtering the results of the request by label. The syntax is 'labels.<name>[:<value>]'. Multiple filters can be ANDed together by connecting with a space. Example: 'labels.department:receiving labels.active'. See [Filtering datasets](https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels) using labels for details.
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the datasets to be listed

List Jobs

Lists all jobs that you started in the specified project. | key: listJobs

Input | Default | Notes
All Users
boolean
allUsers
false
Whether to display jobs owned by all users in the project. Default False.
Connection
connection
/ Required
connectionInput
 
 
Max Creation Time
string
maxCreationTime
Max value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created before or at this timestamp are returned.
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Min Creation Time
string
minCreationTime
Min value for job creation time, in milliseconds since the POSIX epoch. If set, only jobs created after or at this timestamp are returned.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Parent Job ID
string
parentJobId
If set, show only child jobs of the specified parent. Otherwise, show all top-level jobs.
Project ID
string
/ Required
projectId
Project ID of the jobs to list.
Projection
string
projection
Restrict information returned to a set of selected fields
State Filter
string
Value List
stateFilter
000xxx
Filter for job state. Valid values of this enum field are: DONE, PENDING, RUNNING.

List Models

Lists all models in the specified dataset. Requires the READER dataset role. After retrieving the list of models, you can get information about a particular model by calling the models.get method. | key: listModels

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the models to list.

List Projects

Lists projects to which the user has been granted any project role. | key: listProjects

Input | Notes
Connection
connection
/ Required
connectionInput
 
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results

List Routines

Lists all routines in the specified dataset. | key: listRoutines

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Filter
string
filter
If set, only the routines matching this filter are returned. The supported format is routineType:{RoutineType}, where {RoutineType} is a RoutineType enum. Example: routineType:SCALAR_FUNCTION.
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the routines to list.
Read Mask
string
readMask
If set, only the Routine fields in the field mask are returned in the response. If unset, all Routine fields are returned. This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'.

List Table Data

Lists the content of a table in rows. | key: listTableData

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the table to read.
Selected Fields
string
selectedFields
Subset of fields to return, supports select into sub fields. Example: selectedFields = 'a,e.d.f';
Start Index
string
startIndex
Zero-based index of the starting row.
Table ID
string
/ Required
tableId
Table ID of the requested table

List Tables

Lists all tables in the specified dataset. | key: listTables

Input | Notes
Connection
connection
/ Required
connectionInput
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the tables to list.
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
Page Token
string
pageToken
Page token, returned by a previous call, to request the next page of results
Project ID
string
/ Required
projectId
Project ID of the tables to list.

Patch Table

Patch information in an existing table. | key: patchTable

Input | Default | Notes | Example
Clustering
code
clustering
Clustering specification for the table. Must be specified with time-based partitioning, data in the table will be first partitioned and subsequently clustered.
Connection
connection
/ Required
connectionInput
 
 
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the table to patch.
 
Default Collation
string
defaultCollation
Optional. Defines the default collation specification of new STRING fields in the table.
 
Default Rounding Mode
string
defaultRoundingMode
Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified.
 
Description
string
description
Optional. A user-friendly description of this table.
 
Encryption Configuration
code
encryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model.
Expiration Time
string
expirationTime
Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
 
External Data Configuration
code
externalDataConfiguration
Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
Friendly Name
string
friendlyName
Optional. A descriptive name for this table.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
The labels associated with this table. You can use these to organize and group your tables. You can set this property when inserting or updating a table.
Materialized View
code
materializedView
Optional. The materialized view definition.
Max Staleness
string
maxStaleness
Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type.
 
Project ID
string
/ Required
projectId
Project ID of the table to patch.
 
Range Partitioning
code
rangePartitioning
If specified, configures range partitioning for this table.
Require Partition Filter
boolean
requirePartitionFilter
false
Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
 
Schema
code
schema
Optional. Describes the schema of this table.
Table ID
string
/ Required
tableId
Table ID of the table to patch.
 
Table Reference
code
/ Required
tableReference
Reference describing the ID of this table.
Time Partitioning
code
timePartitioning
If specified, configures time-based partitioning for this table.
View
code
view
Optional. The view definition.

Query Job

Runs a BigQuery SQL query synchronously and returns query results if the query completes within a specified timeout. | key: queryJob

Input | Default | Notes | Example
Connection
connection
/ Required
connectionInput
 
 
 
Connection Properties
code
connectionProperties
Optional. Connection properties which can modify the query behavior.
Create Session
boolean
createSession
false
Optional. If true, creates a new session using a randomly generated sessionId. If false, runs query with an existing sessionId passed in ConnectionProperty, otherwise runs query in non-session mode. The session location will be set to QueryRequest.location if it is present, otherwise it's set to the default location based on existing routing logic.
 
Default Dataset
code
defaultDataset
Optional. Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.
Dry Run
boolean
dryRun
false
Optional. If set to true, BigQuery doesn't run the job. Instead, if the query is valid, BigQuery returns statistics about the job such as how many bytes would be processed. If the query is invalid, an error returns. The default value is false.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
Optional. The labels associated with this query. Labels can be used to organize and group query jobs.
Location
string
location
The geographic location where the query job should run. See https://cloud.google.com/bigquery/docs/locations for supported locations.
 
Maximum Bytes Billed
string
maximumBytesBilled
Optional. Limits the bytes billed for this query. Queries with bytes billed above this limit will fail (without incurring a charge). If unspecified, the project default is used.
 
Max Results
string
maxResults
The maximum number of results to return in a single response page. Leverage the page tokens to iterate through the entire collection.
 
Parameter Mode
string
parameterMode
GoogleSQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.
 
Project ID
string
/ Required
projectId
Project ID of the project billed for the query.
 
Query
string
/ Required
query
Required. A query string to execute, using Google Standard SQL or legacy SQL syntax. Example: 'SELECT COUNT(f1) FROM myProjectId.myDatasetId.myTableId'.
 
Query Parameters
code
queryParameters
Optional. Query parameters for GoogleSQL queries.
Request ID
string
requestId
Optional. A unique user provided identifier to ensure idempotent behavior for queries. Note that this is different from the jobId. It has the following properties: It is case-sensitive, limited to up to 36 ASCII characters. A UUID is recommended. Read only queries can ignore this token since they are nullipotent by definition. For the purposes of idempotency ensured by the requestId, a request is considered duplicate of another only if they have the same requestId and are actually duplicates. When determining whether a request is a duplicate of another request, all parameters in the request that may affect the result are considered. For example, query, connectionProperties, queryParameters, useLegacySql are parameters that affect the result and are considered when determining whether a request is a duplicate, but properties like timeoutMs don't affect the result and are thus not considered. Dry run query requests are never considered duplicate of another request. When a duplicate mutating query request is detected, it returns: a. the results of the mutation if it completes successfully within the timeout. b. the running operation if it is still in progress at the end of the timeout. Its lifetime is limited to 15 minutes. In other words, if two requests are sent with the same requestId, but more than 15 minutes apart, idempotency is not guaranteed.
 
Timeout (ms)
string
timeoutMs
Optional. Specifies the maximum amount of time, in milliseconds, that the client is willing to wait for the query to complete. By default, this limit is 10 seconds (10,000 milliseconds). If the query is complete, the jobComplete field in the response is true. If the query has not yet completed, jobComplete is false. You can request a longer timeout period in the timeoutMs field. However, the call is not guaranteed to wait for the specified timeout; it typically returns after around 200 seconds (200,000 milliseconds), even if the query is not complete. If jobComplete is false, you can continue to wait for the query to complete by calling the getQueryResults method until the jobComplete field in the getQueryResults response is true.
 
Use Legacy SQL
boolean
useLegacySql
false
Specifies whether to use BigQuery's legacy SQL dialect for this query. If set to false, the query will use BigQuery's GoogleSQL: https://cloud.google.com/bigquery/sql-reference/ When useLegacySql is set to false, the value of flattenResults is ignored; the query will be run as if flattenResults is false.
 
Use Query Cache
boolean
useQueryCache
false
Optional. Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. The default value is true.
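
Putting it together, a minimal sketch of the inputs to this action (the jobs.query request shape), with a hypothetical table name:

// Hedged sketch of a minimal Query Job request.
const queryJobInputs = {
  query: "SELECT COUNT(*) AS total FROM `my-project.my_dataset.my_table`", // hypothetical table
  useLegacySql: false,
  maxResults: 100,
  timeoutMs: 10000,
};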
 

Raw Request

Send raw HTTP request to Google Cloud BigQuery | key: rawRequest

Input | Default | Notes | Example
Connection
connection
/ Required
connection
 
 
 
Data
string
data
The HTTP body payload to send to the URL.
{"exampleKey": "Example Data"}
Debug Request
boolean
debugRequest
false
Enabling this flag will log the current request.
 
File Data
string
Key Value List
fileData
File Data to be sent as a multipart form upload.
[{key: "example.txt", value: "My File Contents"}]
File Data File Names
string
Key Value List
fileDataFileNames
File names to apply to the file data inputs. Keys must match the file data keys above.
 
Form Data
string
Key Value List
formData
The Form Data to be sent as a multipart form upload.
[{"key": "Example Key", "value": new Buffer("Hello World")}]
Header
string
Key Value List
headers
A list of headers to send with the request.
User-Agent: curl/7.64.1
Max Retry Count
string
maxRetries
0
The maximum number of retries to attempt.
 
Method
string
/ Required
method
The HTTP method to use.
 
Query Parameter
string
Key Value List
queryParams
A list of query parameters to send with the request. This is the portion at the end of the URL similar to ?key1=value1&key2=value2.
 
Response Type
string
/ Required
responseType
json
The type of data you expect in the response. You can request json, text, or binary data.
 
Retry On All Errors
boolean
retryAllErrors
false
If true, retries on all erroneous responses regardless of type.
 
Retry Delay (ms)
string
retryDelayMS
0
The delay in milliseconds between retries.
 
Timeout
string
timeout
The maximum time that a client will await a response to its request
2000
URL
string
/ Required
url
Input the path only (e.g., /projects/{projectId}/jobs); the base URL (https://bigquery.googleapis.com/bigquery/{version}) is already included. For example, to connect to https://bigquery.googleapis.com/bigquery/v2/projects/{projectId}/jobs, enter only /projects/{projectId}/jobs in this field.
/projects/{projectId}/jobs
Use Exponential Backoff
boolean
useExponentialBackoff
false
Specifies whether to use a pre-defined exponential backoff strategy for retries.
 
API Version
string
version
v2
The API version to use. This is used to construct the base URL for the request.
 

Set Policy

Sets the access control policy on the specified resource. | key: setPolicy

Input | Notes | Example
Connection
connection
/ Required
connectionInput
 
 
Policy
code
policy
The complete policy to be applied to the resource. The size of the policy is limited to a few 10s of KB. An empty policy is a valid policy but certain Google Cloud services (such as Projects) might reject them.
Table ID
string
/ Required
resource
The resource for which the policy is being specified. See [Resource names](https://cloud.google.com/apis/design/resource_names) for the appropriate value for this field.
 
Update Mask
string
updateMask
OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only the fields in the mask will be modified. If no mask is provided, the following default mask is used: paths: 'bindings, etag' This is a comma-separated list of fully qualified names of fields. Example: 'user.displayName,photo'.
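
The Policy input follows the standard Cloud IAM policy shape. A hedged sketch that grants a single role on the table, with a hypothetical member:

// Hedged sketch of a Set Policy Policy input.
const policy = {
  bindings: [
    {
      role: "roles/bigquery.dataViewer",
      members: ["user:analyst@example.com"], // hypothetical member
    },
  ],
};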
 

Table Data Insert All

Streams data into BigQuery one record at a time without needing to run a load job. | key: tableDataInsertAll

Input | Default | Notes | Example
Connection
connection
/ Required
connectionInput
 
 
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the destination table.
 
Ignore Unknown Values
boolean
ignoreUnknownValues
false
Optional. Accept rows that contain values that do not match the schema. The unknown values are ignored. Default is false, which treats unknown values as errors.
 
Kind
string
kind
Output only. The resource type.
 
Project ID
string
/ Required
projectId
Project ID of the destination table.
 
Rows
code
rows
The rows to insert into the table. Each row is an object with a json payload of field/value pairs and an optional insertId used for best-effort deduplication.
Skip Invalid Rows
boolean
skipInvalidRows
false
Optional. Insert all valid rows of a request, even if invalid rows exist. The default value is false, which causes the entire request to fail if any invalid rows exist.
 
Table ID
string
/ Required
tableId
Table ID of the destination table.
 
Template Suffix
string
templateSuffix
Optional. If specified, treats the destination table as a base template, and inserts the rows into an instance table named '{destination}{templateSuffix}'. BigQuery will manage creation of the instance table, using the schema of the base template table. See https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables for considerations when working with templates tables.
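
A hedged sketch of the Rows input for this action (the tabledata.insertAll shape): each row carries a json payload of field/value pairs, plus an optional insertId that BigQuery uses for best-effort deduplication.

// Hedged sketch of a Table Data Insert All Rows input.
const rows = [
  { insertId: "row-1", json: { id: 1, customer: "Alice" } }, // hypothetical row
  { insertId: "row-2", json: { id: 2, customer: "Bob" } },
];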
 

Update Dataset

Updates information in an existing dataset. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset resource. | key: updateDataset

Input | Default | Notes | Example
Access
code
access
Optional. An array of objects that define dataset access for one or more entities. You can set this property when inserting or updating a dataset in order to control who is allowed to access the data. If unspecified at dataset creation time, BigQuery adds default dataset access for the following entities: access.specialGroup: projectReaders; access.role: READER; access.specialGroup: projectWriters; access.role: WRITER; access.specialGroup: projectOwners; access.role: OWNER; access.userByEmail: [dataset creator email]; access.role: OWNER.
Connection
connection
/ Required
connectionInput
 
 
 
Creation Time
string
creationTime
Output only. The time when this dataset was created, in milliseconds since the epoch.
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
 
Dataset Reference
code
/ Required
datasetReference
A reference that identifies the dataset.
Default Collation
string
defaultCollation
Optional. Defines the default collation specification of future tables created in the dataset. If a table is created in this dataset without table-level default collation, then the table inherits the dataset default collation, which is applied to the string fields that do not have explicit collation specified.
 
Default Encryption Configuration
code
defaultEncryptionConfiguration
The default encryption key for all tables in the dataset. Once this property is set, all newly-created partitioned tables in the dataset will have encryption key set to this value, unless table creation request (or query) overrides the key.
Default Partition Expiration (ms)
string
defaultPartitionExpirationMs
The default partition expiration for all partitioned tables in the dataset, expressed in milliseconds. When new time-partitioned tables are created in a dataset where this property is set, the table will inherit this value, propagated as the TimePartitioning.expirationMs property on the new table. If you set TimePartitioning.expirationMs explicitly when creating a table, the defaultPartitionExpirationMs of the containing dataset is ignored. When creating a partitioned table, if defaultPartitionExpirationMs is set, the defaultTableExpirationMs value is ignored and the table will not inherit a table expiration deadline.
 
Default Rounding Mode
string
defaultRoundingMode
Optional. Defines the default rounding mode specification of new tables created within this dataset. During table creation, if this field is specified, the table within this dataset will inherit the default rounding mode of the dataset. Setting the default rounding mode on a table overrides this option. Existing tables in the dataset are unaffected. If columns are defined during that table creation, they will immediately inherit the table's default rounding mode, unless otherwise specified.
 
Default Table Expiration (ms)
string
defaultTableExpirationMs
Optional. The default lifetime of all tables in the dataset, in milliseconds. The minimum lifetime value is 3600000 milliseconds (one hour). To clear an existing default expiration with a PATCH request, set to 0. Once this property is set, all newly-created tables in the dataset will have an expirationTime property set to the creation time plus the value in this property, and changing the value will only affect new tables, not existing ones. When the expirationTime for a given table is reached, that table will be deleted automatically. If a table's expirationTime is modified or removed before the table expires, or if you provide an explicit expirationTime when creating a table, that value takes precedence over the default expiration time indicated by this property.
 
Description
string
description
Optional. A user-friendly description of the dataset.
 
ETag
string
etag
Output only. A hash of the resource.
 
Friendly Name
string
friendlyName
Optional. A descriptive name for the dataset.
 
ID
string
id
Output only. The fully-qualified unique name of the dataset in the format projectId:datasetId. The dataset name without the project name is given in the datasetId field. When creating a new dataset, leave this field blank, and instead specify the datasetId field.
 
Is Case Insensitive
boolean
isCaseInsensitive
false
Optional. TRUE if the dataset and its table names are case-insensitive, otherwise FALSE. By default, this is FALSE, which means the dataset and its table names are case-sensitive. This field does not affect routine references.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
The labels associated with this dataset. You can use these to organize and group your datasets. You can set this property when inserting or updating a dataset. See Creating and Updating Dataset Labels for more information.
Last Modified Time
string
lastModifiedTime
Output only. The date when this dataset was last modified, in milliseconds since the epoch.
 
Location
string
location
The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations.
 
Max Time Travel Hours
string
maxTimeTravelHours
Optional. Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days). The default value is 168 hours if this is not set.
 
Project ID
string
/ Required
projectId
Project ID of the dataset being updated.
 
Satisfies PZS
boolean
satisfiesPzs
false
Output only. Reserved for future use.
 
Self Link
string
selfLink
Output only. A URL that can be used to access the resource again. You can use this URL in Get or Update requests to the resource.
 
Storage Billing Model
string
storageBillingModel
Optional. Updates storageBillingModel for the dataset.
 
Tags
code
tags
Output only. Tags for the Dataset.

Update Model

Patch specific fields in the specified model. | key: updateModel

Input | Default | Notes | Example
Connection
connection
/ Required
connectionInput
 
 
 
Creation Time
string
creationTime
Output only. The time when this model was created, in milliseconds since the epoch.
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the requested dataset
 
Default Trial ID
string
defaultTrialId
Output only. The default trialId to use in TVFs when the trialId is not passed in. For single-objective hyperparameter tuning models, this is the best trial ID. For multi-objective hyperparameter tuning models, this is the smallest trial ID among all Pareto optimal trials.
 
Description
string
description
Optional. A user-friendly description of this model.
 
Encryption Configuration
code
encryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the model data while stored in BigQuery storage. This field can be used with models.patch to update encryption key for an already encrypted model.
ETag
string
etag
Output only. A hash of the resource.
 
Expiration Time
string
expirationTime
Optional. The time when this model expires, in milliseconds since the epoch. If not present, the model will persist indefinitely. Expired models will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created models.
 
Feature Columns
code
featureColumns
Output only. Input feature columns for the model inference. If the model is trained with TRANSFORM clause, these are the input of the TRANSFORM clause.
Friendly Name
string
friendlyName
Optional. A descriptive name for this model.
 
Hparam Search Spaces
code
hparamSearchSpaces
Output only. All hyperparameter search spaces in this model.
Hparam Trials
code
hparamTrials
Output only. Trials of a hyperparameter tuning model sorted by trialId.
Label Columns
code
labelColumns
Output only. Label columns that were used to train this model. The output of the model will have a 'predicted_' prefix to these columns.
Labels
code
labels
The labels associated with this model. You can use these to organize and group your models.
Last Modified Time
string
lastModifiedTime
Output only. The time when this model was last modified, in milliseconds since the epoch.
 
Location
string
location
Output only. The geographic location where the model resides. This value is inherited from the dataset.
 
Model ID
string
/ Required
modelId
Model ID of the model to update.
 
Model Reference
code
/ Required
modelReference
Unique identifier for this model.
Model Type
string
modelType
Output only. Type of the model resource.
 
Optimal Trial IDs
string
Value List
optimalTrialIds
000xxx
Output only. For single-objective hyperparameter tuning models, it only contains the best trial. For multi-objective hyperparameter tuning models, it contains all Pareto optimal trials sorted by trialId.
 
Project ID
string
/ Required
projectId
Project ID of the model to update.
 
Training Runs
code
trainingRuns
Information for all training runs in increasing order of startTime.
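 
Update Model is a patch: only the mutable fields you set (description, friendlyName, labels, expirationTime, encryptionConfiguration) are changed, while output-only fields such as trainingRuns are returned as-is. A minimal sketch of the same operation with the google-cloud-bigquery Python client follows; the project, dataset, and model IDs are hypothetical, and Application Default Credentials are assumed.

```python
# Minimal sketch: patch a model's mutable metadata fields.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

model = client.get_model("my-project.my_dataset.my_model")  # hypothetical model

# Only mutable fields can be patched; fields like training_runs and
# feature_columns are output only and cannot be written.
model.description = "Churn classifier, retrained weekly"
model.labels = {"team": "analytics"}

model = client.update_model(model, ["description", "labels"])
```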

Update Routine

Updates information in an existing routine. | key: updateRoutine

Input | Default | Notes | Example
Arguments
code
argument
Input/output argument of a function or a stored procedure.
Connection
connection
/ Required
connectionInput
 
 
 
Creation Time
string
creationTime
Output only. The time when this routine was created, in milliseconds since the epoch.
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the routine to update.
 
Definition Body
string
/ Required
definitionBody
Required. The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement: CREATE FUNCTION JoinLines(x string, y string) as (concat(x, '\n', y)) the definitionBody is concat(x, '\n', y) (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n' the definitionBody is return "\n";\n (here both \n are replaced with linebreaks).
 
Description
string
description
Optional. The description of the routine, if defined.
 
Determinism Level
string
determinismLevel
Optional. The determinism level of the JavaScript UDF, if defined. One of DETERMINISM_LEVEL_UNSPECIFIED / DETERMINISTIC / NOT_DETERMINISTIC
 
ETag
string
etag
Output only. A hash of the resource.
 
Imported Libraries
string
Value List
importedLibraries
000xxx
Optional. If language = 'JAVASCRIPT', this field stores the path of the imported JAVASCRIPT libraries.
 
Language
string
language
Optional. Defaults to 'SQL' if remoteFunctionOptions field is absent, not set otherwise. One of LANGUAGE_UNSPECIFIED / SQL / JAVASCRIPT / PYTHON / JAVA / SCALA
 
Last Modified Time
string
lastModifiedTime
Output only. The time when this routine was last modified, in milliseconds since the epoch.
 
Project ID
string
/ Required
projectId
Project ID of the routine to update.
 
Remote Function Options
code
remoteFunctionOptions
Optional. Remote function specific options.
Return Table Type
code
returnTableType
Optional. Can be set only if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return table type is inferred from definitionBody at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in return table type, at query time.
Return Type
code
returnType
Optional if language = 'SQL'; required otherwise. Cannot be set if routineType = 'TABLE_VALUED_FUNCTION'. If absent, the return type is inferred from definitionBody at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified returned type at query time.
Routine Reference
code
/ Required
routineReference
Reference describing the ID of this routine.
Routine Type
string
/ Required
routineType
The type of routine. One of ROUTINE_TYPE_UNSPECIFIED / SCALAR_FUNCTION / PROCEDURE / TABLE_VALUED_FUNCTION
 
Spark Options
code
sparkOptions
Optional. Spark specific options.
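 
For a SQL function, the definitionBody above is the expression inside the AS (...) clause, with any \n kept as an escaped literal rather than a real linebreak. A minimal sketch of an update with the google-cloud-bigquery Python client follows; the routine name is hypothetical, and Application Default Credentials are assumed.

```python
# Minimal sketch: update the body of an existing SQL function.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

routine = client.get_routine("my-project.my_dataset.JoinLines")  # hypothetical

# `body` carries the definitionBody: the expression inside (but excluding)
# the parentheses of the AS clause, e.g. the body of
#   CREATE FUNCTION JoinLines(x STRING, y STRING) AS (CONCAT(x, '\n', y))
# is CONCAT(x, '\n', y). The Python string below keeps \n as a literal.
routine.body = "CONCAT(x, '\\n', y)"

routine = client.update_routine(routine, ["body"])
```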

Update Table

Updates information in an existing table. | key: updateTable

Input | Default | Notes | Example
Clustering
code
clustering
Clustering specification for the table. When specified together with time-based partitioning, data in the table is first partitioned and subsequently clustered.
Connection
connection
/ Required
connectionInput
 
 
 
Dataset ID
string
/ Required
datasetId
Dataset ID of the table to update.
 
Default Collation
string
defaultCollation
Optional. Defines the default collation specification of new STRING fields in the table. During table creation or update, if a STRING field is added to this table without explicit collation specified, then the field inherits the table default collation. A change to this field affects only fields added afterwards, and does not alter existing fields. The following values are supported: 'und:ci': undetermined locale, case insensitive. '': empty string. Default to case-sensitive behavior.
 
Default Rounding Mode
string
defaultRoundingMode
Optional. Defines the default rounding mode specification of new decimal fields (NUMERIC or BIGNUMERIC) in the table. During table creation or update, if a decimal field is added to this table without an explicit rounding mode specified, then the field inherits the table default rounding mode. Changing this field doesn't affect existing fields.
 
Description
string
description
Optional. A user-friendly description of this table.
 
Encryption Configuration
code
encryptionConfiguration
Custom encryption configuration (e.g., Cloud KMS keys). This shows the encryption configuration of the table data while stored in BigQuery storage.
Expiration Time
string
expirationTime
Optional. The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
 
External Data Configuration
code
externalDataConfiguration
Optional. Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
Friendly Name
string
friendlyName
Optional. A descriptive name for this table.
 
Kind
string
kind
Output only. The resource type.
 
Labels
code
labels
The labels associated with this table. You can use these to organize and group your tables.
Materialized View
code
materializedView
Optional. The materialized view definition.
Max Staleness
string
maxStaleness
Optional. The maximum staleness of data that could be returned when the table (or stale MV) is queried. Staleness encoded as a string encoding of sql IntervalValue type.
 
Project ID
string
/ Required
projectId
Project ID of the table to update.
 
Range Partitioning
code
rangePartitioning
If specified, configures range partitioning for this table.
Require Partition Filter
boolean
requirePartitionFilter
false
Optional. If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
 
Schema
code
schema
Optional. Describes the schema of this table.
Table ID
string
/ Required
tableId
Table ID of the table to update.
 
Table Reference
code
/ Required
tableReference
Reference describing the ID of this table.
Time Partitioning
code
timePartitioning
If specified, configures time-based partitioning for this table.
View
code
view
Optional. The view definition.
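 
Update Table patches only the fields you supply; a common use is widening the schema, which must be additive (you can append NULLABLE columns, but not drop or retype existing ones). A minimal sketch with the google-cloud-bigquery Python client follows; the project, dataset, and table IDs are hypothetical, and Application Default Credentials are assumed.

```python
# Minimal sketch: append a column and update the table description.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

table = client.get_table("my-project.my_dataset.my_table")  # hypothetical table

# Schema changes must be additive: start from the current schema and append.
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField("ingested_at", "TIMESTAMP"))
table.schema = new_schema
table.description = "Raw events, with ingestion timestamp"

table = client.update_table(table, ["schema", "description"])
```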