An integration is composed of component action steps that execute one after another in series. Multiple steps of an integration can come from the same component action. For example, two steps could be HTTP GET actions that fetch data from two different endpoints.

Integrations are triggered either on a schedule, or via a webhook invocation.

Integrations are created by your organization. You can publish versions of your integrations, and then deploy instances of those integrations to one or more of your customers.

We recommend that you follow our Getting Started tutorial to first acquaint yourself with integration development.

How to Create Your First Integration

Creating an Integration#

To create a new integration in the web app, click Integrations from the left-side menu, and then click the + Integration button in the upper-right. You will find yourself in a new integration designer screen. Click the name and description at the top left to give your new integration an appropriate name and description.

Listing and Searching Integrations#

To view all of the integrations your organization has created, click the Integrations link on the left-hand sidebar.

You can search for specific integrations by name by typing a part of the name in the upper search bar, or you can search by description by clicking the Filter button to the right of the search bar.

The Integration Designer#

After creating a new integration or selecting one from your list of integrations, you will find yourself in the integration designer. Here, you can build, test, and publish integrations.

The integration designer contains four important features:

  1. The configuration pane lets you set config variables that are used by your integration, and lets you configure individual steps of your integration.
  2. The testing pane lets you run tests of your integration, supply your integration with sample payloads, and see results of your integration test runs.
  3. The publication pane lets you publish a version of your integration, so it can be deployed to customers.
  4. The majority of the page is taken up by the integration editor pane. Here, you can add steps to your integration, create branches and loops, and generally arrange the flow of your integration.
Create your first few integrations using the integration designer

We recommend that you create your first few integrations through the integration designer to get a feel for how steps interconnect and how data flows through an integration. If you prefer the integration designer in the web app, we encourage you to continue to use it. If you are a developer, you also have the option to write your integration in YAML, which lets you store your integrations in a source control system of your choosing.

Config Variables in Integrations#

As an integration builder you can use config variables and credentials to drive the logic of your integration.

Click the Config Variables button to open the Config Variables drawer. From there, you can define the configuration experience that will be used by your customer-facing teams. You can define names, descriptions, variable types, and optional default values of config variables that you will reference from your integration. You can use headers to organize your config variables, and descriptions to provide helpful hints to your customer-facing teams who will deploy your integrations.

When it comes time for your customer-facing teams to deploy your integration, they can enter or select configuration options and tailor the integration for a particular customer without the involvement of integration builders:

Config variables that you define in the config variable drawer can be used within your integration as inputs for steps, or through the Branch component to drive branching logic.

Config Variable Types#

There are several types of configuration variables:

  • String is a standard string of characters
  • Date follows the form mm/dd/yyyy
  • Timestamp follows the form mm/dd/yyyy, HH:MM [AM/PM]
  • Picklist allows you to define a series of options that your integration deployment team members can choose from.
  • Code lets your integration deployment team enter JSON, XML, or other formatted code blocks. This is handy if customers have unique formats for recurring reports, or other formatted documents that differ from customer to customer.
  • Credential is an API key, username/password, etc., that can be bound to one or more steps of your integration. See credentials.
  • Boolean allows your integration deployment team to choose either true or false.

Once config variables are enumerated in this list, they are available as inputs to actions within your integration.

Integration Triggers#

Integration triggers allow you to define when an instance of an integration should run. There are two types of triggers: schedule triggers and webhook triggers. If you would like your integration to run on a predefined regular basis, you should use a schedule trigger. If you would like to invoke your integration from another system, you should use a webhook trigger.

Scheduled Triggers#

Scheduled triggers allow you to create a regular schedule to dictate how often your integration should run. This is useful if you have an integration that should be triggered consistently at a specific time. You can set up your integration to run at the same time for all customers, or you can set up schedules on a per-customer basis.

To set up the same schedule for all customers, click the integration's trigger, open the Schedule tab, and enter the schedule you would like your integration to follow. You can configure your integration to run every X minutes, hours, days, or weeks:

You can alternatively select Custom and provide a cron string. For example, a trigger of */5 8-16 * * 1-5 would cause your integration to run every five minutes during business hours (8:00-16:55), Monday through Friday. For help computing a cron schedule, see this Cron Calculator.
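The cron matching above can be sketched in code. This is a simplified, illustrative matcher that handles only the field forms used in the example (`*`, `*/n`, and `a-b` ranges), not Prismatic's actual scheduler:

```javascript
// Simplified matcher for five-field cron expressions like "*/5 8-16 * * 1-5".
// Supports only "*", "*/n", "a-b", and plain numbers - enough for the example above.
function fieldMatches(field, value) {
  if (field === "*") return true;
  if (field.startsWith("*/")) return value % Number(field.slice(2)) === 0;
  if (field.includes("-")) {
    const [lo, hi] = field.split("-").map(Number);
    return value >= lo && value <= hi;
  }
  return Number(field) === value;
}

function cronMatches(expression, date) {
  const [min, hour, dayOfMonth, month, dayOfWeek] = expression.split(" ");
  return (
    fieldMatches(min, date.getMinutes()) &&
    fieldMatches(hour, date.getHours()) &&
    fieldMatches(dayOfMonth, date.getDate()) &&
    fieldMatches(month, date.getMonth() + 1) &&
    fieldMatches(dayOfWeek, date.getDay())
  );
}

// Monday 2023-01-02 at 08:05 is within business hours: matches.
console.log(cronMatches("*/5 8-16 * * 1-5", new Date(2023, 0, 2, 8, 5)));
// Saturday 2023-01-07 falls outside the 1-5 day-of-week range: no match.
console.log(cronMatches("*/5 8-16 * * 1-5", new Date(2023, 0, 7, 8, 5)));
```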

To configure schedules on a per-customer basis, first create a config variable of type Schedule by clicking the Config Variables button. You can give your config variable any name you choose:

Then, click your integration trigger and reference the Config Variable you created:

When your integration deployment team later deploys an instance of your integration, they can configure a custom schedule for that instance.

Webhook Triggers#

When an integration is published and an instance of the integration is created, a webhook URL is associated with that instance. Executing HTTP POST requests on that webhook URL results in the instance being triggered to run.

The webhook URL is printed on the instance's page in the web app, or can be viewed by running

prism instances:list --extended --output json

JSON-formatted arguments can be passed to the webhook URL in the body of a POST request, and used as inputs for the actions of the integration, like this:

curl \
--data '{"renderId":51266,"s3Bucket":"test-customer-renders","status":"complete"}' \
--header "Content-Type: application/json" \
'<webhook URL>'

Note that the payload of the POST request is available as the output of the integration's trigger. Steps can then reference values from that payload (the renderId, for example) as reference inputs.

Posting Binary Data with Webhook Triggers#

If you have binary data (like an image or PDF) that you would like to post as part of your webhook invocation, you can pass that binary data in as part of your request. For example, if you have an image, my-image.png, you could invoke a test of an integration with:

curl \
--request POST \
--header 'Content-Type: image/png' \
--data-binary '@/path/to/my-image.png' \
'<webhook URL>'

The binary file can then be accessed by subsequent steps by referencing the trigger's output.

Integration Steps#

Actions, like downloading a file from an SFTP server or posting a message to Slack, are added as steps of an integration. Steps are executed in order, and outputs from one step can be used as inputs for a subsequent step.

The left portion of the integration designer lists a trigger, followed by the steps that constitute your integration.

Steps are executed in order. If one step fails, the integration test or instance stops, and if configured to do so, an alert monitor is triggered to alert your team members of the instance's failure.

Adding Steps to Integrations#

To add a step to an integration, click the + icon underneath the trigger or another action.

Select the action you would like to add to your integration. You can begin to type the name of the action you would like to add to filter the list of actions available.

Step Inputs and Outputs#

Once you have added a step to your integration, you will likely need to configure some inputs for that step. Some inputs are required, and denoted with a * symbol, while others are optional.

Inputs can take one of three forms:

  • A value is a simple string (perhaps a URL for an HTTP request).

  • A reference is a reference to the output of a previous step or trigger. For example, if a previous step pulls down a file from AWS S3 and the step is named Fetch my file, then you can reference Fetch my file as an input for another step, and that subsequent step will be passed the file that Fetch my file returned.

    Outputs from one step can be referenced by a subsequent step through the previous step's results field. So, if a previous step returned an object - for example, if an HTTP GET action pulled down some JSON reading { "firstkey": "firstvalue", "secondkey": "secondvalue" } - you can access the secondvalue property in a subsequent step's input by referencing that HTTP GET step and choosing results.secondkey in your Reference search.

  • A config variable references one of the integration's config variables. For example, we can select a config variable, CMS API Endpoint, as an input for one of our steps. Config variables can be distinct for each customer, so each customer can be configured with a different CMS API Endpoint.
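To make reference resolution concrete, here is a rough sketch in plain JavaScript. The step-results object below is hypothetical; it simply mirrors the results field described above:

```javascript
// Hypothetical map of step outputs, keyed by step name. Each step exposes
// its output under a "results" field, mirroring the Reference search above.
const stepOutputs = {
  "HTTP GET": {
    results: { firstkey: "firstvalue", secondkey: "secondvalue" },
  },
};

// Resolve a reference like "results.secondkey" against a named previous step.
function resolveReference(stepName, path) {
  return path
    .split(".")
    .reduce((value, key) => (value == null ? undefined : value[key]), stepOutputs[stepName]);
}

console.log(resolveReference("HTTP GET", "results.secondkey")); // "secondvalue"
```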

To enter values for the input of a step, first select whether you want the input to be a value, reference, or config variable. Then, enter the string for a value, search for a previous step's output for a reference, or choose the name of a config variable for a config variable.

When a step runs, it optionally outputs data that can be used as input for subsequent steps. For example, a Dropbox Download File step will output a binary file that can then be passed in as input in an AWS S3 step.

For More Information: Component Action Inputs

Changing Step Names#

By default steps are uniquely named after the action they invoke (so, they're named things like CSV to YAML, or Delete Object). To override that default name, click the step and click the default name to edit its name.

Like using descriptive variable names in a program, renaming steps allows you to give your steps descriptive names. For example, rather than a step being called Delete Object you can name it Delete Schematics File from Share. We recommend giving your steps descriptive names so your team members can read through integrations and understand their purpose more readily.

Note that once you change a step's name, you will likely need to update any references to that step's outputs in code components. See the documentation on referencing previous steps' outputs within a code component.

Reordering Steps#

Steps are executed in series. To reorder steps, click and drag a step up or down.

Persisting Data Between Runs#

Sometimes it's helpful to save data from one execution of an integration so it can be used in a subsequent execution. Prismatic provides components that allow you to save some state from one run for use in a future run.

Why is this important or helpful? Imagine you have an integration that pulls down and processes data from a data source. Your integration recently processed a record with ID "123", and the next time your integration runs you want to ensure it processes ID "124" and above. You can persist "123" using a Save Value action, and then the next time your integration runs it can use Get Value to know that "123" was the most recently processed record. You can then build your integration to process newer records than the one that was saved.
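The save-and-fetch pattern described above can be sketched as follows. The in-memory Map stands in for Prismatic's cross-run persistence, and saveValue/getValue mirror the Save Value and Get Value actions:

```javascript
// In-memory stand-in for the Persist Data component's key/value store.
const persistedData = new Map();

const saveValue = (key, value) => persistedData.set(key, value);
// Unset keys yield null, mirroring Get Value's behavior.
const getValue = (key) => (persistedData.has(key) ? persistedData.get(key) : null);

// One "execution": process only records newer than the last saved ID.
function runIntegration(records) {
  const lastProcessedId = getValue("lastProcessedId") ?? 0;
  const newRecords = records.filter((r) => r.id > lastProcessedId);
  if (newRecords.length > 0) {
    saveValue("lastProcessedId", Math.max(...newRecords.map((r) => r.id)));
  }
  return newRecords.map((r) => r.id);
}

console.log(runIntegration([{ id: 122 }, { id: 123 }])); // first run processes both
console.log(runIntegration([{ id: 122 }, { id: 123 }, { id: 124 }])); // next run processes only 124
```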

Let's look at two components that take advantage of persisting data:

The Persist Data Component#

Data can be persisted between runs using the Persist Data component. Data are stored in key-value pairs, and values can be strings, numbers, objects, or lists.

You can store a key/value pair using the Save Value action, or you can use Persist Data's other actions to append to a persisted list. If you would like to save a timestamp instead, you can use the Save Current Time action to save the current time into a key of your choosing.

Later, in a subsequent run, you can fetch the value you saved using the Get Value action. If a key is not set, Get Value will return null.

You can remove data from a list, or remove a key/value pair altogether, using Persist Data's other actions.

The Process Data Component#

For some integrations, it's handy to know what data from a list you've already processed, and what data you haven't. For example, your integration might pull down a list of orders from a data store to be converted into invoices. Your order list might look like this:

"orderid": 123,
"items": {
"widgets": 5,
"whatsits": 7
"orderid": 122,
"items": {
"whoseits": 2

The Process Data component's DeDuplicate action allows you to pass in such a list of objects in descending order, along with a unique identifier (like orderid). The action uses data persistence between runs to track the most recently processed item (in this example, "orderid": 123). The next time this integration is run and a list of orders is passed into the DeDuplicate step, the step will return all objects in the list that appear before the object with "orderid": 123. So, if there's an order ID 124, 125, etc., it'll return those. That way, the subsequent execution will ignore order ID 123 and before, and instead process only more recent orders.
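The deduplication behavior can be sketched like this. This is an illustrative reimplementation, not the component's actual code; the real action also persists the newest identifier for the next run:

```javascript
// Given a descending list and the most recently processed identifier
// (persisted from the prior run), return only the items before it.
function deduplicate(items, idKey, lastProcessedId) {
  const index = items.findIndex((item) => item[idKey] === lastProcessedId);
  // If the persisted ID is not in the list, every item is new.
  return index === -1 ? items : items.slice(0, index);
}

const orders = [{ orderid: 125 }, { orderid: 124 }, { orderid: 123 }, { orderid: 122 }];
console.log(deduplicate(orders, "orderid", 123)); // [{ orderid: 125 }, { orderid: 124 }]
```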

Loops in Integrations#

How to Loop Over Files in an Integration

For many integrations it's handy to be able to loop over a list of items. If your integration processes files on an SFTP server, for example, you might want to loop over a list of files on the server. If your integration sends alerts to users, you might want to loop over a list of users.

Prismatic provides the loop component to allow you to loop over a list of items. After adding a loop step to your integration, you can then add steps within the loop that will execute over and over again.

The loop component takes one input: items, a list of numbers, strings, objects, etc. For example, one step might generate a list of files that your integration needs to process. Its output might look like this:


The loop component can then be configured to loop over those files by referencing the results of the list files step:

Subsequent steps can reference the loop step's currentItem and index parameters to get values like path/to/file3.txt and 2 respectively:
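Loop semantics can be sketched as follows; the file names other than path/to/file3.txt are made up for illustration:

```javascript
// Sketch of how a loop step exposes currentItem and index to the steps inside it.
function runLoop(items, stepFn) {
  const results = [];
  items.forEach((currentItem, index) => {
    results.push(stepFn({ currentItem, index }));
  });
  return results;
}

// Hypothetical file list; the third entry matches the example above.
const files = ["path/to/file1.txt", "path/to/file2.txt", "path/to/file3.txt"];
const loopResults = runLoop(files, ({ currentItem, index }) => `${index}: ${currentItem}`);
console.log(loopResults[2]); // "2: path/to/file3.txt"
```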

For More Information: The Loop Component, Loops in YAML, Looping Over Files Quickstart

Looping Over Lists of Objects#

The list of objects passed into a loop component can be as simple or complex as you like.

In this example, we have a loop named Loop Over Users, and the loop is presented with items in the form:

"name": "Bob Smith",
"email": ""
"name": "Sally Smith",
"email": ""

Then the loop will iterate twice - once for each object in the list, and we can write a code component that accesses the loop's currentItem and index values and sub-properties of currentItem like this:

module.exports = async (
  { logger },
  { loopOverUsers: { currentItem, index } }
) => {
  logger.info(`User #${index + 1}: ${currentItem.name} - ${currentItem.email}`);
};

That will log one line per user, like User #1: Bob Smith - followed by the user's email address.

Return Values of Loops#

A loop will collect the results of the last step within the loop, and will save those results as an array. For example, if the loop is presented the list of JSON-formatted user objects above, and the last step in the loop is a code component reading:

module.exports = async (context, { loopOverUsers: { currentItem } }) => {
  return { data: `Processed ${currentItem.name}` };
};

Then the result of the loop will yield:

["Processed", "Processed"]


Branching in Integrations#

The branch component allows you to add branching logic to your integration. Think of branches as logical paths that your integration can take. Given some information about config variables or step results, your integration can follow one of many paths.

Branch actions are handy when you need to conditionally execute some steps. Here are a couple of examples of things you can accomplish with branching:

Example 1: If each of your customers has a different preferred file store, your integration can branch into "save to AWS S3", "save to Azure Files" or "save to DropBox" branches depending on a customer-specific config variable value.

Example 2: Your customers want to be alerted when their rocket fuel level is below a certain threshold. You can branch into "send an alert" and "fuel level is okay" branches depending on results of a "check rocket fuel level" step.

Example 3: You want to upsert data into a system that doesn't support upserting. You can check if a record exists, and branch into "add a new record" or "update the existing record" branches depending on whether the record exists.

For More Information: The Branch Component, Branching in YAML

Branching on a Value#

Adding a Branch on Value action to your integration allows you to create a set of branches based on the value of some particular variable. It's very similar to the switch/case construct present in many programming languages.

How to Branch on a Single Value in an Integration

Consider example 1 above. Suppose your customers each have a config variable named fileStore that might be equal to aws, azure or dropbox, depending on which file storage service they like to use. You can use the config variable fileStore as the input value, and create three distinct branches for AWS, Azure, and Dropbox. From there, you can add the appropriate actions for saving a file to AWS S3, Azure Files, and DropBox respectively:
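That branch selection behaves like a switch/case, which might be sketched as follows (the branch names here are illustrative):

```javascript
// Switch/case sketch of a Branch on Value step keyed on the fileStore config variable.
function chooseBranch(fileStore) {
  switch (fileStore) {
    case "aws":
      return "Save to AWS S3";
    case "azure":
      return "Save to Azure Files";
    case "dropbox":
      return "Save to DropBox";
    default:
      return "Else"; // unmatched values follow the Else branch
  }
}

console.log(chooseBranch("azure")); // "Save to Azure Files"
```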

For More Information: Branch on Value Action

Branching on an Expression#

The Branch on an Expression action allows you to create branches within your integration based on more complex inputs. You can compare values, like config variables, step results, or static values, and follow a branch based on the results of the comparisons.

How to Branch on an Expression in an Integration

Consider example 2 above. You have a step that checks rocket fuel level for a customer, and you want to alert users in different ways if their fuel levels are low. You can express this problem with some pseudocode:

if fuelLevel < 50:
    send an alert SMS
else if fuelLevel < 100:
    send a warning email
else:
    do nothing

To express this pseudocode in an integration, add a step that looks up rocket fuel level. Then, add a Branch on an Expression action to your integration.

Create one branch named Fuel Critical, and under Condition Inputs check that the result of the fuel level check step is less than 50. Then, create another branch named Fuel Warning and check that the result of the fuel level check step is less than 100.

This will generate a branching step that follows the Fuel Critical branch (with its Send Alert SMS step) if fuel levels are less than 50, the Fuel Warning branch (with its Send Warning Email step) if fuel levels are less than 100, or the Else branch if fuel levels are 100 or above.

You can compare config variables, results from previous steps, or static values to one another using the following comparison functions:

  • equals: check that the two fields are equal to one another.
  • does not equal: check that the two fields are not equal to one another.
  • is greater than: check that the left field is greater than the right field.
  • is greater than or equal to: same as is greater than, but the values can also be equivalent.
  • is less than: check that the left field is less than the right field.
  • is less than or equal to: same as is less than, but the values can also be equivalent.
  • contained in: check that the right field contains the left field. The right field must be an array. For example, you can check if "b" is contained in ["a","b","c"].
  • not contained in: check that the right field (an array) does not contain the left field.

Multiple expressions can be grouped together with And or Or clauses, which execute like programming and and or clauses:

if ((foo > 500 and bar <= 20) or ("b" in ["a","b","c"]))
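The comparison functions and grouping can be sketched as plain predicates; the function names below are illustrative stand-ins for the comparison options listed above:

```javascript
// The comparison functions, sketched as plain predicates.
const comparisons = {
  isGreaterThan: (left, right) => left > right,
  isLessThanOrEqualTo: (left, right) => left <= right,
  containedIn: (left, right) => right.includes(left), // right must be an array
};

// Evaluate the grouped example: (foo > 500 and bar <= 20) or (item in list).
function evaluateExpression(foo, bar, item, list) {
  return (
    (comparisons.isGreaterThan(foo, 500) && comparisons.isLessThanOrEqualTo(bar, 20)) ||
    comparisons.containedIn(item, list)
  );
}

console.log(evaluateExpression(600, 10, "b", ["a", "b", "c"])); // true
console.log(evaluateExpression(0, 100, "z", ["a", "b", "c"])); // false
```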

For More Information: Branch on Expression Action

Converging Branches#

Regardless of which branch is followed, branches always converge to a single point. Once a branch has executed, the integration will continue with the next step listed below the branch convergence.

This presents a problem: how do steps below the convergence reference steps in branches that may or may not have executed (depending on which branch was followed)? In your integration you may want to say "if branch foo was executed, get the results from step A, and if branch bar was executed, get the results instead from step B." Prismatic provides the Select Executed Step Result action to handle that scenario.

Imagine that you have two branches - one for incoming invoices, and one for outgoing invoices, with different logic contained in each. Regardless of which branch was executed, you'd like to insert the resulting data into an ERP. You can leverage the Select Executed Step Result action to say "get me the incoming or outgoing invoice - whichever one was executed."

This action iterates over the list of step results that you specify, and returns the first one that has a non-null value (which indicates that it ran).

Within the component configuration drawer, select the step(s) whose results you would like, and the Select Executed Step Result step will yield the result of whichever one was executed.
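The first-non-null selection can be sketched like this; the invoice objects are hypothetical stand-ins for the branch step results:

```javascript
// Sketch of Select Executed Step Result: return the first non-null result,
// since a step on a branch that did not execute leaves its result null.
function selectExecutedStepResult(stepResults) {
  return stepResults.find((result) => result != null) ?? null;
}

const incomingInvoice = null; // this branch did not execute
const outgoingInvoice = { invoiceId: 42, direction: "outgoing" }; // hypothetical result
console.log(selectExecutedStepResult([incomingInvoice, outgoingInvoice]));
```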

Testing Integrations#

The integration designer provides a sandbox for testing integrations from the bottom-right Testing pane. There, you can invoke an integration, configure test credentials and config variables, and view test logs in real time.

You can test your integration after you set testing config variables and credentials by clicking the SAVE & RUN TEST button.

How to Test an Integration that Requires Config Variables and Credentials

Test Config Variables and Credentials#

If your integration uses config variables, you can specify testing values for those variables under the Configuration tab of the Testing pane. If you specified default values for your config variables, those will be preset for you. Otherwise, fill in testing values and credentials for the purposes of testing your integration.

We recommend that you create testing, non-production credentials for integration sandbox tests.

Test Run Outputs#

After running an integration test, the output of each step is available in the Step Outputs tab of the Test Runner pane. Select which step you would like to see results for from the dropdown menu. You can reference those results in subsequent steps. This is helpful for debugging and verifying the flow of data within your integration.

Test Run Logs#

Output for your test run is displayed under the Logs tab in the Test Runner pane. If any of your steps log output or throw errors, you will see those logs and errors in this window.

Synchronous and Asynchronous Integrations#

Integrations are configured by default to run asynchronously. That means that whenever an integration is invoked via its trigger webhook URL, the integration begins to run and the system that invoked the integration can go on to complete other work. This is the most common case for integrations - you want to start up an instance when some certain event occurs, but you don't want to wait around while the instance runs.

Sometimes, though, it's handy for an application to get information back from the instance that was invoked. For example, you might want your proprietary software to wait until an instance runs to completion before completing other work. In that case, you can choose to run your integration synchronously. Then, when your software makes a call to the instance's webhook trigger URL the HTTP request is held open until the instance run is complete.

How to Run Integrations Synchronously

When you choose to run your integrations synchronously, the HTTP request that invokes an instance returns a redirect to a URL containing the output results of the final step of the integration. For example, if the final step of your integration pulls down JSON from an example API, you will see something like this when you invoke the integration synchronously:

curl \
--data '{}' \
--header "Content-Type: application/json" \
--header "prismatic-synchronous: true" \
--location \
'<webhook URL>'

{"id":1,"name":"Leanne Graham","username":"Bret","email":"","address":{"street":"Kulas Light","suite":"Apt. 556","city":"Gwenborough","zipcode":"92998-3874","geo":{"lat":"-37.3159","lng":"81.1496"}},"phone":"1-770-736-8031 x56442","website":"","company":{"name":"Romaguera-Crona","catchPhrase":"Multi-layered client-server neural-net","bs":"harness real-time e-markets"}}
Configure synchronous requests to follow redirects

When you invoke an instance synchronously, your request is accepted, and then is redirected to a URL containing the response payload. Make sure your request library is configured to follow redirects.

For curl, for example, omit the -X or --request POST flags since they override the HTTP verb used on the redirect, and include a -L / --location flag so it follows redirects.

You can toggle if your integration is synchronous or asynchronous by clicking the Execution Configuration button on the right side of the integration designer.

You can also pass a prismatic-synchronous header with a webhook invocation to instruct your instance to run synchronously or asynchronously:

curl \
--header "prismatic-synchronous: false" \
--request POST \
'<webhook URL>'

HTTP Status Codes for Synchronous Integrations#

When an instance is configured to run synchronously or is invoked synchronously with the prismatic-synchronous header, the HTTP response returns a status code 200 - OK by default. It's sometimes useful, though, to return other HTTP status codes. For example, if a client submits wrongly formatted data to be processed by an instance, it might be helpful to return a 406 - Not Acceptable or 415 - Unsupported Media Type.

To accomplish this, you can configure the final step of your integration to return a different status code. Most commonly, you can add a Stop Execution step to the end of your integration, and specify an HTTP response that it should return.

$ curl \
--verbose \
--location \
--header "prismatic-synchronous: true" \
'<webhook URL>'
* Connected to ( port 443 (#0)
< HTTP/2 415

If you would like to return HTTP status codes from a custom component at the end of your integration instead, return an object with a statusCode attribute instead of a data attribute:

return { statusCode: 415 };

Synchronous Call Limitations#

Response Body and Status Code Limitations#

When an integration is invoked synchronously, the integration redirects the caller to a URL containing the output results of the final step of the integration. If the final step of the integration is a Stop Execution action, or any custom component action that returns a statusCode, the redirect does not occur and the caller receives a null response body instead.

API Gateway Size and Time Limitations#

AWS API Gateway times out requests after 29 seconds, and our maximum response size is 500MB. So, to get a response from an instance that is invoked synchronously, please ensure that your integration runs in under 29 seconds and produces a final step payload of less than 500MB.

If your integration regularly takes over 29 seconds to run, or produces large responses, we recommend that you run your integrations asynchronously instead. When you invoke an integration asynchronously you receive an executionId:

curl \
--data '{}' \
--header "Content-Type: application/json" \
'<webhook URL>'

That execution ID can be exchanged later with the Prismatic API for logs and step results using the executionResult GraphQL mutation.

Integration Retry Configuration#

If your organization has a professional or enterprise subscription, you can configure your asynchronously-invoked instances to retry if they fail to run to completion. This is handy if your integration relies on a flaky third-party API - the third party API might be down briefly, but back up a few minutes later. You can configure your integration to try again in a few minutes when the API is back up, without needing to trigger unnecessary alert monitors for your team.

How to Configure an Integration to Retry

To configure an integration to retry its execution, click the Execution Configuration button on the right side of the integration designer and toggle the Retry option on.

Max Attempts indicates the maximum number of times (up to 5) that Prismatic will run the same instance invocation in the event of failure. If an instance has failed more than Max Attempts number of times, the run will be marked as execution failed and relevant alert monitors, if configured, will fire.

Minutes Between Attempts indicates the number of minutes that Prismatic should wait before trying to run an instance again. If, for example, you specify 4 minutes, and the first instance invocation failed at 10:24, then the instance will attempt to run again at 10:28, 10:32, 10:36, 10:40 and 10:44 if it repeatedly fails to complete.

Note: Due to the nature of scheduled AWS Lambda functions, scheduled retries are precise to the minute (as opposed to the second). So, an attempt that fails at 10:24 with a Minutes Between Attempts set to 4 might trigger next at 10:28 or 10:29.

If you have Exponential Backoff selected, your instance will wait a longer and longer time between attempted runs, using an exponential factor of 2. For example, if your Minutes Between Attempts is set to 3 and Exponential Backoff is set, then retry attempts will fire after 3, 6, 12, 24, and 48 minutes.

Note: The maximum amount of time an instance will take before retrying is 24 hours. If backoff is computed to wait more than 24 hours, it'll fire after waiting 24 hours instead.
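The retry schedules described above can be sketched as a small helper. This illustrates the arithmetic only; it is not Prismatic's scheduler:

```javascript
// Compute the waits (in minutes) between retry attempts: constant spacing by
// default, doubling when Exponential Backoff is enabled, each wait capped at
// 24 hours (1440 minutes).
function retryWaits(minutesBetweenAttempts, maxAttempts, exponentialBackoff) {
  const waits = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const wait = exponentialBackoff
      ? minutesBetweenAttempts * 2 ** attempt
      : minutesBetweenAttempts;
    waits.push(Math.min(wait, 1440));
  }
  return waits;
}

console.log(retryWaits(3, 5, true)); // [3, 6, 12, 24, 48]
console.log(retryWaits(4, 5, false)); // [4, 4, 4, 4, 4]
```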

Finally, Retry Cancelation gives you the ability to cancel a set of retry attempts if a more recent invocation of the instance has occurred. For example, if your integration takes a payload as part of its invocation and then updates some third party system with that payload data, you may want to cancel older invocations if newer data comes in. Otherwise, you may end up updating some third party system with newer data, and then the retry would overwrite the newer data with older data afterwards.

To configure retry cancelation, select a unique request ID from the trigger payload. For example, you might pass in a header, x-my-unique-id: abc123, as part of your trigger payload. If another invocation comes in that updates resource abc123, you might want to cancel currently queued retries. To do that, select your trigger's results.headers.x-my-unique-id reference as your Unique Cancelation ID.

Cancelation IDs do not need to be headers. Instead, you can select a key from the payload body. For example, if your instance invocation looks like this:

curl -L -d '{"productId":"abc123","price":"250","description":"A box of widgets"}' \
--header "Content-Type: application/json" \
'<webhook URL>'

You can key your unique cancelation ID off of the productId property of the payload body.
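Retry cancelation can be sketched as follows; the Map of queued retries is a stand-in for Prismatic's internal retry queue:

```javascript
// Queued retries are keyed by a unique cancelation ID; a newer invocation
// with the same ID cancels them so stale data never overwrites fresh data.
const queuedRetries = new Map(); // cancelationId -> pending retry info

function queueRetry(cancelationId, attempt) {
  queuedRetries.set(cancelationId, { attempt });
}

function handleInvocation(cancelationId) {
  // A fresh invocation for the same resource supersedes any queued retries.
  return queuedRetries.delete(cancelationId);
}

queueRetry("abc123", 2);
console.log(handleInvocation("abc123")); // true: the queued retry was canceled
console.log(handleInvocation("abc123")); // false: nothing left to cancel
```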

For More Information: Instance Retry and Replay

Integration Attachments#

Your team can save and share integration-related documents alongside an integration by clicking on the Attachments tab from the integration designer page.

Publishing Integrations#

By publishing an integration, you mark it ready for deployment to customers.

Open the publication pane on the left side of the page, type a note about the changes you made to the integration, and then click PUBLISH to publish your integration:

Forking Integrations#

How to Make a Copy of an Integration

Sometimes you will want to make a copy of an integration and modify the copy. This is called forking an integration.

From the integration designer, click the icon on the bottom left of the page. Then, click Fork Integrations. Give your forked integration a new name and description and then click ADD.

View Deployed Instances#

To view all instances of an integration that have been deployed, click the Instances tab from the integration's designer screen. This screen displays the customers to which this integration has been deployed.

Deleting Integrations#

Deleting an integration will delete all instances of that integration

Use caution when deleting an integration. Deletion of an integration also deletes all deployed instances of that integration.

From the integration designer, click the icon on the bottom left of the page. Click the Delete Integration button at the bottom of the page and confirm by clicking REMOVE INTEGRATION.