Logging

Access to logs is critical for building, deploying, and supporting integrations. If an alert monitor alerts your team to an instance behaving unexpectedly, your team needs to know precisely when an instance ran, the status of the instance's steps as they finished, and what happened if a step failed to run as expected. Prismatic provides access to logs of all instance invocations and tests.

You can also elect to stream logs to an external logging system.

Log retention

Logs, execution data, and step results are retained for 14 days and are purged thereafter. If you have an enterprise plan and require longer log retention, please reach out to support to discuss adjusting retention policies.

Viewing logs for all customers

To see logs for all instances for all of your customers, click the Logs link on the left-hand sidebar. Here you will see log messages, their timestamps in your local time, the name of the instance, the name of the integration the instance was created from, and the name of the customer the instance was deployed to.

Viewing logs for a specific customer

To view logs for a specific customer, click the Customers link on the left-hand sidebar. Click into a customer, and then select the customer's Logs tab. Here you will see log messages, their timestamps in your local time, the name of the instance, and the name of the integration the instance was created from.

For More Information: Customers

Viewing logs for a specific instance

To view logs for a specific instance, access the instance either by clicking Instances on the left-hand sidebar and selecting an instance, or by clicking Customers on the left-hand sidebar, selecting a customer, and selecting an instance under the Instances tab. Once in an instance, select the Logs tab. Here you will see log messages, their timestamps in your local time, the name of the integration the instance was created from, and the name of the customer the instance was deployed to.

For More Information: Instances

Searching and filtering logs

You can search for specific messages in logs by typing part of the message into the Search Logs search bar on the top of any log page.

Search customer logs in Prismatic app

To see more information about a specific log line, click the line to bring up an additional information panel at the bottom of the screen.

Customer log details in Prismatic app

Additionally, you can filter logs by Log Severity (Error, Warn, Info, Debug), by Timestamp, or by Integration by clicking the Filter drop-down to the right of the search bar.

Filter customer logs in Prismatic app

Viewing logs for an integration test

Within the integration designer, you can test your integrations as you build them. Logs for those tests are visible in the Test Runner pane under the Runner tab.

Logs for integration tests in Prismatic app

For More Information: Testing Integrations

Viewing connection logs

Connections that are used for testing in the integration designer, and those assigned to deployed instances, create logs. If a connection ever throws an error (for example, if its credentials have expired), you will see that in the connection's logs.

To view a connection's logs, click the log icon to the right of the connection.

Connection logs in Prismatic app

You can click any log line in the popover that comes up to get more information about that log line.

What gets logged?

If a component invokes context.logger.{debug,info,warn,error}() within its code, that log line is saved in Prismatic's logging system.
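As an illustration, a component's perform function might log at several levels. This is a minimal sketch: the stub context below only mimics the logger object Prismatic supplies at runtime, and the step and parameter names are hypothetical.

```javascript
// Hypothetical perform function for a component action. In a real
// component, Prismatic supplies the context object; the itemCount
// parameter is made up for this example.
const perform = async (context, params) => {
  context.logger.debug(`Fetching ${params.itemCount} items`);
  try {
    // ... call an external API here ...
    context.logger.info("Fetch succeeded");
  } catch (err) {
    context.logger.error(`Fetch failed: ${err.message}`);
    throw err;
  }
};

// Stub context that collects log lines, standing in for Prismatic's runner:
const lines = [];
const makeLevel = (level) => (msg) => lines.push({ level, msg });
const stubContext = {
  logger: {
    debug: makeLevel("debug"),
    info: makeLevel("info"),
    warn: makeLevel("warn"),
    error: makeLevel("error"),
  },
};

perform(stubContext, { itemCount: 3 });
```

In a deployed instance, each of those logger calls becomes a searchable log line in Prismatic's logging system.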

In addition to logs issued by components, you will see the following types of log lines in your logs:

| Type | Example | Purpose | Log Level |
| --- | --- | --- | --- |
| Instance Start | Starting Instance 'Sample Instance' | Indicates the beginning of a run of an instance. | info |
| Instance End | Ending Instance 'Sample Instance' | Indicates that an instance ran successfully to completion. | info |
| Step Started | Fetch file from Dropbox | Displays the name of the step that was invoked. | info |
| Step Failed | {{ERROR MESSAGE}} | Indicates that a step of an instance failed to run, and displays the related error message. | error |

For More Information: context.logger

Log levels

Log levels in Prismatic include debug, info, warn, and error. In the web app, debug and info lines are presented with a green dot next to them, warn messages are accompanied by yellow dots, and error messages by red dots.

Log levels illustrated and explained

External log streaming

Organizations on an enterprise plan can configure Prismatic logs to stream to an external logging service (like DataDog or New Relic), or to your own proprietary logging system. Logging services generally have robust APIs that accept HTTP POST requests with JSON payloads containing log messages. All you need to configure in Prismatic is where to send the logs, what format the log data should take, and any headers that your logging service requires (like an authorization header or API key).
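Conceptually, each streamed log line results in an HTTP POST like the sketch below. The endpoint URL, header name, and payload fields here are placeholders standing in for whatever your logging service expects; this is not Prismatic's internal code.

```javascript
// Illustrative sketch of the POST a log stream performs. The endpoint
// and auth header are placeholders for your logging service's values.
const endpoint = "https://logs.example.com/api/v2/logs";
const headers = {
  "Content-Type": "application/json",
  "X-Api-Key": "YOUR_API_KEY", // whatever auth your service requires
};
const payload = {
  message: "Starting Instance 'Sample Instance'",
  timestamp: 1637087683123,
  severity: "info",
  service: "Prismatic",
};

// With Node 18+ this could be sent via the built-in fetch:
// await fetch(endpoint, { method: "POST", headers, body: JSON.stringify(payload) });
```

The three pieces you configure in Prismatic map directly onto this request: the URL, the headers, and the shape of the JSON body.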

To configure log streaming to an external service, open Settings on the left-hand sidebar, and then select the Log Streams tab. Create a new log stream by clicking + Log stream.

Add log stream in Prismatic app

Enter the URL the logs should be sent to, and add any headers that your logging service requires (many require an API key or authorization header).

Configure log streaming to external service in Prismatic app

Next, create a log message template. This template defines the shape of the message that will be sent to your logging service. You can add placeholders for the log message, as well as information about the instance, customer, flow and step that generated the message.

Create log message template in Prismatic app

Your template can include the following placeholders, which are substituted when a message is sent to the external logging system:

| Placeholder | Description | Example |
| --- | --- | --- |
| {{ timestamp }} | Timestamp of the message in milliseconds since epoch | 1637087683123 |
| {{ timestamp_s }} | Timestamp of the message in seconds since epoch | 1637087683 |
| {{ timestamp_ns }} | Timestamp of the message in nanoseconds since epoch | 1637087683123000000 |
| {{ timestamp_iso }} | Timestamp of the message in ISO format | "2021-11-16T18:34:43.123Z" |
| {{ message }} | The full message body | "This is a test" |
| {{ severity }} | Name of the log level (debug, info, warn, error, metric) | "warn" |
| {{ instanceId }} | The global ID that corresponds to the instance of the integration | "SW5zdEXAMPLE" |
| {{ instanceName }} | Name of the instance being executed | "Update Inventory" |
| {{ flowConfigId }} | The global ID of the instance's configured flow | "SW5zdEXAMPLE" |
| {{ integrationId }} | The global ID of the version of the integration that is deployed | "SW5zdEXAMPLE" |
| {{ integrationName }} | Name of the integration being executed | "Update Inventory" |
| {{ flowId }} | The global ID of the flow of the deployed integration | "SW5zdEXAMPLE" |
| {{ flowName }} | Name of the integration flow being executed | "Remove inventory after order fulfillment" |
| {{ stepName }} | Name of the step being executed, if available | "Loop over order items" |
| {{ isTestExecution }} | Is this log from a test in the integration designer? | true |
| {{ executionId }} | The global ID that corresponds to the execution | "SW5zdEXAMPLE" |
| {{ customerExternalId }} | The external ID assigned to a customer | "abc-123" |
| {{ executionErrorStepName }} | Name of the step that resulted in an execution error | "Loop over order items" |
| {{ durationMS }} | Duration in milliseconds of the associated step or overall execution | "1000" |
| {{ succeeded }} | Whether the associated step or execution succeeded | "true" |
| {{ errorMessage }} | Error message for the associated step or execution | "This is an error" |
| {{ retryAttemptNumber }} | The number of retry attempts of the associated step or execution | "0" |
| {{ retryForExecutionId }} | The global ID associated with the original execution in the case of an execution retry | "SW5zdEXAMPLE" |
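To picture how substitution works, here is a simplified, hypothetical rendering function. Prismatic's actual templating engine may differ in details such as quoting and escaping; this sketch only illustrates the concept.

```javascript
// Simplified, illustrative placeholder substitution: each {{ key }} is
// replaced with the JSON-encoded value for that key.
const substitute = (template, values) =>
  template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, key) =>
    JSON.stringify(values[key] !== undefined ? values[key] : null)
  );

const template =
  '{ "message": {{ message }}, "severity": {{ severity }}, "instance": {{ instanceName }} }';

const rendered = substitute(template, {
  message: "This is a test",
  severity: "warn",
  instanceName: "Update Inventory",
});
// rendered is now a valid JSON string ready to POST to a logging service.
```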

This template works well for many logging platforms, but may need to be modified to fit your needs:

Default message template

{
  "message": {{ message }},
  "timestamp": {{ timestamp }},
  "severity": {{ severity }},
  "service": "Prismatic",
  "instance": {{ instanceName }},
  "customer": {{ customerExternalId }},
  "integration": {{ integrationName }},
  "isTestExecution": {{ isTestExecution }},
  "flow": {{ flowName }},
  "step": {{ stepName }},
  "executionid": {{ executionId }},
  "instanceId": {{ instanceId }},
  "flowConfigId": {{ flowConfigId }},
  "integrationId": {{ integrationId }},
  "flowId": {{ flowId }},
  "executionErrorStepName": {{ executionErrorStepName }},
  "duration": {{ durationMS }},
  "succeeded": {{ succeeded }},
  "errorMessage": {{ errorMessage }},
  "retryAttempt": {{ retryAttemptNumber }},
  "retryForExecutionId": {{ retryForExecutionId }}
}

Testing log streaming

Once you have saved your log stream configuration, you can test it by clicking the Test payload button at the top right of the log stream screen. This sends a test log message to your external logging system, substituting test values like "Test message" and "Test integration" into your template.

Note: If your external logging provider uses CORS to prohibit logs from being sent directly from a web browser, you may not be able to use the Test payload button, and you may see a CORS error in your web browser's developer console. In that case, save your configuration and run a test of any integration; logs from the execution should flow to your external logging provider.

Streaming logs to DataDog

DataDog is an application monitoring platform that includes a logging service. To stream logs to DataDog, you will first need to generate an API key. Make sure you generate an API key (not an application key).

Next, enter https://http-intake.logs.datadoghq.com/api/v2/logs as your endpoint. Add a header with a name of DD-API-KEY and enter the API key that you generated.

Configure DataDog log streaming in Prismatic app
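Putting those pieces together, the DataDog stream configuration corresponds roughly to a request like this. This is an illustration only: YOUR_DATADOG_API_KEY is a placeholder, and the body shows just a few fields from the default template.

```javascript
// Sketch of the request shape the DataDog log stream produces.
// YOUR_DATADOG_API_KEY is a placeholder for the key you generated.
const url = "https://http-intake.logs.datadoghq.com/api/v2/logs";
const options = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "DD-API-KEY": "YOUR_DATADOG_API_KEY",
  },
  body: JSON.stringify({
    message: "Ending Instance 'Sample Instance'",
    severity: "info",
    service: "Prismatic",
  }),
};

// With a valid key, this could be sent via: await fetch(url, options);
```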

The default message template (above) works well for DataDog, but you can modify it to adjust attribute names and the like as you see fit.

Once configured, logs from all of your customers' enabled instances and all of your test runs within the integration designer will be streamed to DataDog:

List of logs in DataDog app
Note: For security reasons, DataDog prohibits sending logs directly from your web browser when using an API key. Because of that, the Test payload button does not work with DataDog. As long as your API key is valid, you should still see instance logs appear in DataDog.

Streaming logs to New Relic

New Relic is an application monitoring platform that includes a logging service. To stream logs to New Relic, you will first need to generate an API key. Make sure you create an INGEST - LICENSE key.

Now, create a new external log stream. Enter https://log-api.newrelic.com/log/v1 as the endpoint if your New Relic data is hosted in the US, or https://log-api.eu.newrelic.com/log/v1 if it is hosted in the EU. Under headers, create a header named X-License-Key and enter the license key you generated above. Note: you are given the option to copy your "key" or "key ID"; you want "Copy key".

Configure New Relic log streaming in Prismatic app

The default message template (above) works well for New Relic, but you can modify it to adjust attribute names and the like as you see fit.

Once configured, logs from all of your customers' enabled instances and all of your test runs within the integration designer will be streamed to New Relic:

List of logs in New Relic app

Logging metrics to an external service

In addition to log lines, you can use context.logger to write out objects containing metrics that you wish to send to an external streaming service.

For example, a code component can contain a line like this:

logger.metric({
  inventoryItem: {
    id: "123",
    price: 10.55,
    quantity: 3,
  },
});

Your external streaming configuration can then extract attributes from the object passed to metric(). For example, your configuration could read:

{
  "message": {{ message }},
  "timestamp": {{ timestamp }},
  "severity": {{ severity }},
  "itemId": {{ inventoryItem.id }},
  "itemPrice": {{ inventoryItem.price }},
  "itemQuantity": {{ inventoryItem.quantity }}
}

Any time a metric line that contains inventoryItem.id, etc. is encountered, those additional attributes will be added to the payload that is sent to the logging system. Messages that don't contain inventoryItem.id simply won't pass an itemId to your logging service.
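The dotted-path lookup involved can be sketched as follows. This is a simplified illustration of the idea, not Prismatic's implementation.

```javascript
// Walk a dotted path like "inventoryItem.id" through a nested object,
// returning undefined if any segment is missing.
const lookup = (obj, path) =>
  path
    .split(".")
    .reduce((cur, key) => (cur == null ? undefined : cur[key]), obj);

const metric = {
  inventoryItem: { id: "123", price: 10.55, quantity: 3 },
};

lookup(metric, "inventoryItem.id"); // returns "123"
lookup(metric, "inventoryItem.missing"); // returns undefined, so the attribute is omitted
```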

Note: When logger.metric() is called, {{ message }} is the stringified JSON version of the object, and {{ level }} is set to 99.