Sending Data from Orchestration Engine to Celonis data model
This feature is currently available as a Private Preview only.
During a Private Preview, only customers who have agreed to our Private Preview usage agreements can access this feature. Additionally, the features documented here are subject to change and/or cancellation, so they may not be available to all users in the future.
For more information about our Private Preview releases, including the level of Support offered with them, see: Feature release types.
Orchestration Engine's Process Orchestrations together with Action Flows allow for seamless interaction with Celonis knowledge models. You can use the data and signals received from Celonis as events in your business Process Orchestrations. When the processes finish, you can send back a response to Celonis using action feedback.
You can reuse data from Process Intelligence Graph in your Process Orchestration with the help of dedicated Action Flow modules.
When your Process Orchestration is completed, you can send data from Orchestration Engine to the Celonis Platform.
When your Orchestration Engine instance is set up, you can send post-action feedback to the Celonis data model with the results gathered in the process context. To send the data, you need an extractor that brings data from Orchestration Engine to the Celonis data model. For Orchestration Engine, you can customize the extractor so that it shares the data that you want to send.
Prerequisites
Make sure you have a Data Pool created in your Celonis Platform instance.
You can create your own data extractor or use the one prepared by Celonis. The steps below show how the Emporix Process Context by Template data connection works.
To get the Process Context by Template extractor, contact the support team.
To create a data connection:
Go to Data Pools and choose Add Data Connection.
This creates a connection to your Celonis Data Pool that links to Orchestration Engine as the Data Source.
Choose Connect to Data Source.
Choose Emporix Process Contexts by Template as a custom connector.
Enter the required data:
Name: enter a unique name for your new data connection.
API URL: https://{team}.{realm}.celonis.cloud
AUTH_URL: https://{team}.{realm}.celonis.cloud/oauth2/token
Process Orchestration ID: click the selected Process Orchestration. You can find the ID of your Process Orchestration in Studio.
The ID appears once you click the three-dots menu next to the Process Orchestration asset in Studio and click Key.
Client ID and Client Secret: you can find the ID and the secret in the Emporix Developer Portal, in the Manage API Keys section, under Orchestration Engine API data.
The new connection will be created and visible in the Data Connections list.
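The connection URLs above use team and realm placeholders. A minimal sketch of how they resolve; the values "acme" and "eu-1" are hypothetical stand-ins for your own Celonis team and realm:

```python
# Sketch of how the connection's URL placeholders resolve.
# "acme" and "eu-1" are hypothetical values for your team and realm.
API_URL_TEMPLATE = "https://{team}.{realm}.celonis.cloud"
AUTH_URL_TEMPLATE = "https://{team}.{realm}.celonis.cloud/oauth2/token"

def resolve(template: str, team: str, realm: str) -> str:
    """Substitute the team and realm placeholders in a connection URL."""
    return template.format(team=team, realm=realm)

api_url = resolve(API_URL_TEMPLATE, team="acme", realm="eu-1")
auth_url = resolve(AUTH_URL_TEMPLATE, team="acme", realm="eu-1")
print(api_url)   # https://acme.eu-1.celonis.cloud
print(auth_url)  # https://acme.eu-1.celonis.cloud/oauth2/token
```

Use the resolved values, not the templates, when filling in the connection form.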
If you want to build your extractor from scratch, you can customize the connection in the Celonis Data Pool connections. However, with the Orchestration Engine extractor, the basic setup is already prepared. The only thing to customize is the process context data coming from your Process Orchestration.
These steps show how Emporix prepares the setup:
Choose the three-dots icon next to your connection in Data Pools and click Customize.
Add or edit the parameters for the connection.
Check the authentication method. It should be the OAuth2 (Client Credentials) method with the following endpoint and type details:
Get Token Endpoint: {Connection.AUTH_URL}/oauth/token
Grant Type: client_credentials
Don't modify this configuration.
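For context, a client credentials token request is a form-encoded POST to the token endpoint. This sketch only builds the request the extractor would send; the client ID and secret values are hypothetical, and the actual HTTP call is handled by the extractor itself:

```python
from urllib.parse import urlencode

def build_token_request(auth_url: str, client_id: str, client_secret: str):
    """Build the OAuth2 client_credentials token request (form-encoded POST)."""
    body = urlencode({
        "grant_type": "client_credentials",   # grant type set in the connector
        "client_id": client_id,
        "client_secret": client_secret,
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return auth_url, headers, body

# Hypothetical credentials from the Emporix Developer Portal.
url, headers, body = build_token_request(
    "https://acme.eu-1.celonis.cloud/oauth2/token", "my-client-id", "my-secret")
```

The access token returned by this request is what authorizes every subsequent extraction call.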
Check the process context endpoint configuration.
The response that is displayed is the part you can customize:
Make sure context$processId is checked as a Primary Key. Without context$processId, it is not possible to link the child tables back to the parent.
Go to Studio and choose the Process Orchestration that you have configured the connection with Celonis for.
In the Process Orchestration, add a Post Action Feedback as a process step.
You can configure the post action feedback event to send only the data that you want to share with Celonis. To do that, you need to configure and map the JSON string in the completion event of the post action feedback scenario in Make. In this example, it's the product ID, stock, and quotes information. All the other data gathered in the process context is ignored when sending the context to Celonis.
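The mapping described above can be read as building a reduced JSON payload from the full process context. A minimal sketch, with hypothetical field names and values; only the fields mapped in the completion event reach Celonis:

```python
import json

# Hypothetical process context gathered during a run.
process_context = {
    "productId": "prod-123",
    "stock": 42,
    "quotes": [{"quoteId": "q-1", "amount": 99.5}],
    "internalNotes": "not shared",  # ignored: not mapped in the Make scenario
}

def map_feedback(context: dict) -> str:
    """Keep only the mapped fields (product ID, stock, quotes) as a JSON string."""
    mapped = {key: context[key] for key in ("productId", "stock", "quotes")}
    return json.dumps(mapped)

payload = map_feedback(process_context)
```

Everything not listed in the mapping, such as the internal notes above, stays out of the payload.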
As a result of this Make scenario configuration, the data imported by the extractor changes and is limited to the information that we chose to send. In this quotes example, you can see a new nested table created by the extractor: execution_context$context&celonis_postaction_feedback$quotes
If you want to limit how far back the post action feedback data is loaded, you can use the createdSince filter.
In Celonis, go to Data Pools and open the Transformations section of the Data Processing.
You can set the filter in Data Pools > Your extractor > Data Jobs > Load Context:
Choose the data jobs for which you want to modify the load.
Go to Time Filter, configure the filter, and customize the creation period.
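The effect of the createdSince filter can be sketched as keeping only records created on or after a cutoff. The records and timestamps below are hypothetical:

```python
from datetime import datetime

# Hypothetical records as returned by the process context endpoint.
records = [
    {"context$processId": "p-1", "metadata$createdAt": "2024-01-10T08:00:00Z"},
    {"context$processId": "p-2", "metadata$createdAt": "2024-06-01T08:00:00Z"},
]

def created_since(rows, since_iso: str):
    """Mimic the createdSince filter: keep rows created on or after the cutoff."""
    cutoff = datetime.fromisoformat(since_iso.replace("Z", "+00:00"))
    return [
        r for r in rows
        if datetime.fromisoformat(r["metadata$createdAt"].replace("Z", "+00:00")) >= cutoff
    ]

recent = created_since(records, "2024-05-01T00:00:00Z")
```

With a May 2024 cutoff, only the record created in June survives the filter.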
For more details and instructions on how the Celonis data model and transformation look, see the /document/preview/442214#UUID-1a45be85-2ed2-fce5-1733-a4d93e26edba documentation.
It's recommended to create additional filters to limit the load context and its content.
To get the data only from finished Process Orchestration runs (and to exclude gathering the data for Process Orchestrations that are still running), you can add a filter that fetches data only from Process Orchestrations with a finished status.
You can set the filter in Data Pools > Your Extractor > Data Jobs > Load Context.
In the Filter Statement section, add status = 'finished'.
To get the data only from the finished Process Orchestration runs and to make sure only the changed data is loaded, you can use a filter for the finished status and modified time.
You can set the filter in Data Pools > Your Extractor > Data Jobs > Load Context.
In the Delta Filter Statement section, add status = 'finished' AND updatedSince = <%=max_modified_time%>.
The <%=max_modified_time%> is a custom extractor parameter that is created in the Parameters tab.
Provide these values in the fields:
type: dynamic
table: process_context
column: metadata$updatedAt
operation type: FIND_MAX
data type: date
default date: past date
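The FIND_MAX parameter can be read as: take the latest metadata$updatedAt already loaded into process_context and use it as the delta cutoff, falling back to the default past date when the table is empty. A sketch with hypothetical rows:

```python
# Sketch of a dynamic FIND_MAX parameter: the extractor takes the maximum
# metadata$updatedAt already loaded and substitutes it into the delta filter.
# Rows and the default date are hypothetical.
process_context = [
    {"status": "finished", "metadata$updatedAt": "2024-03-01T10:00:00Z"},
    {"status": "running",  "metadata$updatedAt": "2024-03-05T10:00:00Z"},
]

DEFAULT_DATE = "2000-01-01T00:00:00Z"  # the "past date" default

def find_max_updated(rows) -> str:
    """FIND_MAX over metadata$updatedAt (ISO timestamps sort lexically)."""
    return max((r["metadata$updatedAt"] for r in rows), default=DEFAULT_DATE)

max_modified_time = find_max_updated(process_context)
delta_filter = f"status = 'finished' AND updatedSince = {max_modified_time}"
```

On each run, only rows modified after the previous maximum are fetched, which keeps delta loads small.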
To generate logs, go to Data Pools > Your Extractor > Data Jobs > Extract Event Log.
The prepared script creates an event log table based on the name of the Process Orchestration. It supports delta tables. The script creates a single event for each Process Orchestration.
Use this script:
CREATE TABLE IF NOT EXISTS event_log (
    execution_template_name varchar(256),
    instance_id varchar(128) PRIMARY KEY,
    created_at TIMESTAMP,
    last_modified TIMESTAMP,
    status varchar(64)
);

DROP TABLE IF EXISTS event_log_new;

CREATE TABLE event_log_new (
    execution_template_name varchar(256),
    instance_id varchar(128) PRIMARY KEY,
    created_at TIMESTAMP,
    last_modified TIMESTAMP,
    status varchar(64)
);

INSERT INTO event_log_new
SELECT "execution_templates"."name" AS "execution_template_name",
       "execution_context"."context$processid" AS "instance_id",
       "execution_context"."metadata$createdAt" AS "created_at",
       "execution_context"."metadata$updatedAt" AS "last_modified",
       "execution_context"."status" AS "status"
FROM "execution_templates", "execution_context"
WHERE "execution_context"."context$executionTemplateID" = "execution_templates"."id";

MERGE INTO event_log
USING event_log_new
ON event_log_new.instance_id = event_log.instance_id
WHEN MATCHED THEN UPDATE SET
    last_modified = event_log_new.last_modified,
    status = event_log_new.status
WHEN NOT MATCHED THEN INSERT
    (execution_template_name, instance_id, created_at, last_modified, status)
VALUES (event_log_new.execution_template_name, event_log_new.instance_id,
        event_log_new.created_at, event_log_new.last_modified,
        event_log_new.status);
The resulting activity table contains the data gathered by the script: the Process Orchestration name, ID, creation time, last modification time, and status.
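The MERGE step of the script behaves as an upsert keyed on instance_id: matched rows are refreshed, new rows are inserted. A minimal sketch of that semantics in Python, with hypothetical rows:

```python
# Minimal sketch of the script's MERGE semantics: an upsert into event_log
# keyed on instance_id. All rows shown are hypothetical.
event_log = {
    "p-1": {"status": "running", "last_modified": "2024-03-01T10:00:00Z"},
}
event_log_new = {
    "p-1": {"status": "finished", "last_modified": "2024-03-02T10:00:00Z"},
    "p-2": {"status": "finished", "last_modified": "2024-03-02T11:00:00Z"},
}

for instance_id, row in event_log_new.items():
    if instance_id in event_log:                       # WHEN MATCHED
        event_log[instance_id].update(
            status=row["status"], last_modified=row["last_modified"])
    else:                                              # WHEN NOT MATCHED
        event_log[instance_id] = dict(row)             # INSERT the new row
```

This is why re-running the job is safe: existing events are updated in place rather than duplicated.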
Set up the Data Model to establish relations between all of the extractor's components. Go to Data Models > open the model > choose New Foreign Key.
To create the connections, use the elements from the Dimension table and the Fact table.
Mandatory relations:
Link Process Context (Dimension) with Event Log (Fact) by linking event_log.instance_id and execution_context.context$processid.
Link Execution Template (Dimension) with Process Context (Fact) by linking execution_templates.id and execution_context.context$executionTemplateID.
Recommended relations:
Use Event Log as the activity table.
Link Process Context (Dimension) with the additional tables (Fact) using execution_context.context$processid to load any additional data as a part of your post action feedback event.
Optional relations:
Link Execution Template (Dimension) with Execution Template Triggers (Fact) by linking execution_templates.id and execution_templates$trigger.execution_templates_id.
If you want multi-tenant separation of Orchestration Engine data, link Account (Dimension) with Execution Template (Fact) by linking account.id and execution_templates.tenant.
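The foreign keys above are ordinary equality joins between the extracted tables. A sketch of the two mandatory relations over hypothetical rows, using a generic inner join helper:

```python
# Sketch of the mandatory foreign-key relations, joining hypothetical rows.
execution_templates = [{"id": "t-1", "name": "Quote approval"}]
execution_context = [
    {"context$processid": "p-1", "context$executionTemplateID": "t-1"},
]
event_log = [{"instance_id": "p-1", "status": "finished"}]

def join(left, right, left_key, right_key):
    """Inner-join two row lists on the given key columns."""
    index = {r[right_key]: r for r in right}
    return [{**l, **index[l[left_key]]} for l in left if l[left_key] in index]

# event_log.instance_id -> execution_context.context$processid
log_with_context = join(event_log, execution_context,
                        "instance_id", "context$processid")
# execution_context.context$executionTemplateID -> execution_templates.id
context_with_template = join(execution_context, execution_templates,
                             "context$executionTemplateID", "id")
```

If these keys are missing or mismatched, the Data Model cannot connect events back to their Process Orchestration, which is why context$processId must be a primary key in the extraction.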
Here's an example of a configured data model:
Execution context: the central component; it belongs to one Process Orchestration.
Process Orchestration: it can have many process contexts and many Process Orchestration trigger events, but it can have only one account.
Event log: it has a 1:1 relationship with the process context.
Account: represents the Orchestration Engine tenant; it can have many Process Orchestrations.
To learn more about the configuration of Data Models in Celonis, see the /document/preview/442347#UUID-e40faca9-8748-f42f-05fd-97d2b693478a documentation.
It's possible to load all the data that you have in your Process Orchestration's process context. However, we recommend that you send only the data used by Celonis.
The mapping of process context data with Celonis can also be done manually, instead of creating a Make scenario for that.
However, this can be error-prone because Process Orchestrations can be edited at any time by different users and the mapping is not reflected automatically.