Version: 11.7.x

Single View Trigger Generator

caution

This plugin is a BETA plugin and, as such, is currently under active development. Use it with caution.

The Single View Trigger Generator consists of three fundamental parts:

  • The consumption of pr-update messages
  • The strategy execution
  • The production of sv-trigger messages or pc records

Each of these parts requires configuring a set of environment variables and config maps.

Environment variables

The following table lists all the environment variables available to customize the service to your needs.

note

When creating the service from the marketplace, the following environment variables will be added for you with some default values, but you still need to update them properly for the service to work.

| Name | Required | Description | Default value |
|------|----------|-------------|---------------|
| LOG_LEVEL | ✓ | Level to use for logging; to choose from: error, fatal, warn, info, debug, trace, silent | silent |
| MONGODB_URL | ✓ | MongoDB URL where the projections are stored | - |
| MONGODB_NAME | ✓ | MongoDB database name where the projections are stored | - |
| EVENT_STORE_CONFIG_PATH | ✓ | Path to the Event Store Config file | - |
| EVENT_STORE_TARGET | ✓ | Kafka topic to send the sv-trigger messages to, or MongoDB collection to save the pc records in | - |
| SINGLE_VIEW_NAME | ✓ | The name of the Single View | - |
| KAFKA_PROJECTION_UPDATES_FOLDER | ✓ | Path to the Kafka Projection Updates folder | - |
| ER_SCHEMA_FOLDER | ✓ | Path to the ER Schema folder | - |
| PROJECTION_CHANGES_SCHEMA_FOLDER | - | Path to the custom strategies folder where the custom strategies scripts are stored | - |
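For reference, a possible set of values might look like the following. All paths, names, and the topic are purely illustrative; EVENT_STORE_TARGET holds either a Kafka topic or a MongoDB collection name, depending on which producer you configure.

```
LOG_LEVEL=info
MONGODB_URL=mongodb://localhost:27017
MONGODB_NAME=single-views
EVENT_STORE_CONFIG_PATH=/home/node/app/configs/eventStoreConfig.json
EVENT_STORE_TARGET=sv-trigger-topic
SINGLE_VIEW_NAME=sv_customers
KAFKA_PROJECTION_UPDATES_FOLDER=/home/node/app/kafkaProjectionUpdates
ER_SCHEMA_FOLDER=/home/node/app/erSchema
PROJECTION_CHANGES_SCHEMA_FOLDER=/home/node/app/projectionChangesSchema
```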

Config Maps

The service can use the following three config maps:

note

When creating the service from the marketplace, the following config maps will be created for you with some default values, but you still need to configure them properly for the service to work.

ER Schema

The ER Schema config map contains the erSchema.json file, which describes the relationships between the projections of the System of Record.

Remember to copy/paste the mount path into the ER_SCHEMA_FOLDER environment variable so the service can read the file. To know more on how to configure the file please go to the ER Schema page.
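As a minimal sketch, an erSchema.json describing a one-to-many relationship between two projections might look like the following. The projection names, condition name, and field names here are purely illustrative; refer to the ER Schema page for the authoritative format.

```json
{
  "version": "1.0.0",
  "config": {
    "pr_customers": {
      "outbound": {
        "pr_orders": {
          "conditions": {
            "order_to_customer": {
              "oneToMany": true,
              "condition": {
                "ID_CUSTOMER": "ID_CUSTOMER"
              }
            }
          }
        }
      }
    }
  }
}
```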

Projection Changes Schema

The Projection Changes Schema config map contains the projectionChangesSchema.json file, which defines how to reach the base projection of the Single View starting from the projection on which the ingestion message was received.

Remember to copy/paste the mount path into the PROJECTION_CHANGES_SCHEMA_FOLDER environment variable so the service can read the file. If you need more info on how to configure the projectionChangesSchema.json file, please refer to the Projection Changes Schema page.
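As a minimal sketch, a projectionChangesSchema.json declaring the path from an updated projection back to the base projection of a Single View might look like the following. The Single View and projection names are purely illustrative; refer to the Projection Changes Schema page for the authoritative format.

```json
{
  "version": "1.0.0",
  "config": {
    "sv_customers": {
      "paths": [
        {
          "path": ["pr_orders", "pr_customers"]
        }
      ]
    }
  }
}
```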

Kafka Projection Updates

The Kafka Projection Updates config map contains the kafkaProjectionUpdates.json file which defines the topics from where to consume the Projection Updates and the strategy to apply to each message.

Remember to copy/paste the mount path into the KAFKA_PROJECTION_UPDATES_FOLDER environment variable so the service can read the file. If you need more info on how to configure the kafkaProjectionUpdates.json file, please refer to the Kafka Projection Updates page.
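As a minimal sketch, a kafkaProjectionUpdates.json mapping a projection to its update topic and strategy might look like the following. The projection name, topic, and field names are purely illustrative; refer to the Kafka Projection Updates page for the authoritative format.

```json
{
  "pr_customers": {
    "updatesTopic": "my-tenant.development.my-system.pr-customers.update",
    "strategy": "__automatic__"
  }
}
```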

Event Store Config

The Event Store Config is a JSON file containing the configuration of the service's own consumer and producer, and it has the following format:

```json
{
  "consumer": {
    "kafka": {
      // Kafka consumer configuration (see below)
    }
  },
  "producer": {
    "<kafka | mongo>": {
      // Kafka or mongo producer configuration (see below)
    }
  }
}
```
caution

Note that only one consumer and one producer may be configured at a time, so the service knows which kind to use. Providing more than one consumer or producer will fail the config map validation and shut down the service at startup.

Consumers

At the moment, Kafka is the only available consumer; it reads pr-update messages produced by the Real-Time Updater. To configure it, you must follow the JsonSchema specification below.

Kafka consumer config JsonSchema

```json
{
  "type": "object",
  "required": ["brokers", "consumerGroupId"],
  "properties": {
    "brokers": { "type": "string" },
    "consumerGroupId": { "type": "string" },
    "consumeFromBeginning": { "type": "boolean", "default": false },
    "ssl": {
      "oneOf": [
        { "type": "boolean" },
        {
          "type": "object",
          "additionalProperties": true,
          "properties": {
            "ca": {
              "type": "string",
              "description": "path to the file containing the CA certificate in PEM format"
            },
            "key": {
              "type": "string",
              "description": "path to the file containing the client private key in PEM format"
            },
            "passphrase": {
              "type": "string",
              "description": "password necessary to unlock the private key"
            },
            "cert": {
              "type": "string",
              "description": "path to the file containing the client certificate in PEM format"
            }
          }
        }
      ]
    },
    "sasl": {
      "type": "object",
      "properties": {
        "mechanism": {
          "type": "string",
          "enum": ["plain", "scram-sha-256", "scram-sha-512"]
        },
        "username": { "type": "string" },
        "password": { "type": "string" }
      }
    },
    "clientId": { "type": "string" },
    "connectionTimeout": { "type": "number" },
    "authenticationTimeout": { "type": "number" },
    "reauthenticationThreshold": { "type": "number" },
    "requestTimeout": { "type": "number" },
    "enforceRequestTimeout": { "type": "boolean" },
    "retry": {
      "type": "object",
      "properties": {
        "maxRetryTime": { "type": "number" },
        "initialRetryTime": { "type": "number" },
        "factor": { "type": "number" },
        "multiplier": { "type": "number" },
        "retries": { "type": "number" }
      }
    },
    "logLevel": {
      "type": "string",
      "enum": ["NOTHING", "ERROR", "WARN", "INFO", "DEBUG"]
    }
  }
}
```
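For instance, a consumer configuration valid against the schema above might look like the following; broker addresses, group id, and credentials are placeholders.

```json
{
  "consumer": {
    "kafka": {
      "brokers": "broker-1:9092,broker-2:9092",
      "consumerGroupId": "my-consumer-group",
      "consumeFromBeginning": false,
      "ssl": true,
      "sasl": {
        "mechanism": "scram-sha-256",
        "username": "my-username",
        "password": "my-password"
      }
    }
  }
}
```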

Producers

For the producer you can choose between two options: Kafka, which produces sv-trigger messages, or MongoDB, which saves pc records. With MongoDB, the service saves Projection Changes to the database, just like the Real-Time Updater does. With Kafka, it sends sv-trigger messages, which the Single View Creator can read once its configuration is updated accordingly. Here's the configuration specification for both:

MongoDB producer config JsonSchema

```json
{
  "type": "object",
  "required": ["url", "dbName"],
  "properties": {
    "url": { "type": "string" },
    "dbName": { "type": "string" }
  }
}
```

Kafka producer config JsonSchema

```json
{
  "type": "object",
  "required": ["brokers"],
  "properties": {
    "brokers": { "type": "string" },
    "ssl": {
      "oneOf": [
        { "type": "boolean" },
        {
          "type": "object",
          "additionalProperties": true,
          "properties": {
            "ca": {
              "type": "string",
              "description": "path to the file containing the CA certificate in PEM format"
            },
            "key": {
              "type": "string",
              "description": "path to the file containing the client private key in PEM format"
            },
            "passphrase": {
              "type": "string",
              "description": "password necessary to unlock the private key"
            },
            "cert": {
              "type": "string",
              "description": "path to the file containing the client certificate in PEM format"
            }
          }
        }
      ]
    },
    "sasl": {
      "type": "object",
      "properties": {
        "mechanism": {
          "type": "string",
          "enum": ["plain", "scram-sha-256", "scram-sha-512"]
        },
        "username": { "type": "string" },
        "password": { "type": "string" }
      }
    },
    "clientId": { "type": "string" },
    "connectionTimeout": { "type": "number" },
    "authenticationTimeout": { "type": "number" },
    "reauthenticationThreshold": { "type": "number" },
    "requestTimeout": { "type": "number" },
    "enforceRequestTimeout": { "type": "boolean" },
    "retry": {
      "type": "object",
      "properties": {
        "maxRetryTime": { "type": "number" },
        "initialRetryTime": { "type": "number" },
        "factor": { "type": "number" },
        "multiplier": { "type": "number" },
        "retries": { "type": "number" }
      }
    },
    "logLevel": {
      "type": "string",
      "enum": ["NOTHING", "ERROR", "WARN", "INFO", "DEBUG"]
    }
  }
}
```

An example of a complete configuration would be:

```json
{
  "consumer": {
    "kafka": {
      "brokers": "localhost:9092,localhost:9093",
      "clientId": "client-id",
      "consumerGroupId": "group-id",
      "consumeFromBeginning": true,
      "logLevel": "NOTHING"
    }
  },
  "producer": {
    "mongo": {
      "url": "mongodb://localhost:27017",
      "dbName": "pc-sv-books-test"
    }
  }
}
```
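If you want to produce sv-trigger messages instead of pc records, swap the mongo producer for a kafka one; in that case the EVENT_STORE_TARGET environment variable must contain the Kafka topic to which the sv-trigger messages are sent. A sketch of such a configuration, with placeholder broker addresses and client id, could be:

```json
{
  "consumer": {
    "kafka": {
      "brokers": "localhost:9092,localhost:9093",
      "clientId": "client-id",
      "consumerGroupId": "group-id",
      "consumeFromBeginning": true,
      "logLevel": "NOTHING"
    }
  },
  "producer": {
    "kafka": {
      "brokers": "localhost:9092,localhost:9093",
      "clientId": "client-id",
      "logLevel": "NOTHING"
    }
  }
}
```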