Single View Creator
The Single View Creator is the service that keeps the Single View updated with data retrieved from Projections. This service is available as a plugin or as a template:
- plugin: it allows you to use the Single View Creator as a black box. You just need to configure it through its Config Maps and environment variables.
- template: it gives you access to the source code of the Single View Creator, which will be hosted on a Git repository. You will need to update its dependencies and maintain the code yourself.
We strongly recommend using the plugin. The template is meant only for advanced use cases that the plugin cannot cover.
Environment Variables
Name | Required | Description | Default value |
---|---|---|---|
CONFIGURATION_FOLDER | - | Folder where configuration files are mounted | /home/node/app/src/ |
LOG_LEVEL | ✓ | Level to use for logging | - |
HTTP_PORT | - | Port exposed by the service | 3000 |
TYPE | ✓ | Identifies the type of projection changes that need to be read. It should be the same as the Single View name you want to update. | - |
SCHEDULING_TIME | - | A quantity of time, in milliseconds: every X milliseconds the service wakes up and checks whether there are Projection Changes in NEW state to work on. The service keeps working until no more new Projection Changes are found; when that happens, it goes back to sleep for X milliseconds. | 60000 |
PROJECTIONS_MONGODB_URL | ✓ | MongoDB connection string where projections are stored. Must be a valid uri | - |
SINGLE_VIEWS_MONGODB_URL | ✓ | MongoDB connection string where single view must be stored. Must be a valid uri | - |
PROJECTIONS_CHANGES_MONGODB_URL | - | MongoDB connection string from which Projection Changes are read. If not set, PROJECTIONS_MONGODB_URL is used. | - |
PROJECTIONS_CHANGES_DATABASE | ✓ | The database from which Projection Changes are read. | - |
PROJECTIONS_DATABASE | - | The database from which Projections are read. If not set, PROJECTIONS_CHANGES_DATABASE is used. | - |
PROJECTIONS_CHANGES_COLLECTION | - | If you have set a custom Projection Changes collection name in the Advanced configuration, set its name here. Otherwise, it is fd-pc-SYSTEM_ID, where SYSTEM_ID is the id of the System of Record this Single View Creator is responsible for. | - |
SINGLE_VIEWS_DATABASE | ✓ | The db from where single views are written. | - |
SINGLE_VIEWS_COLLECTION | ✓ | It must be equal to the name of the Single View the service is in charge of keeping updated. | - |
SINGLE_VIEWS_PORTFOLIO_ORIGIN | ✓ | Name representing the source of the requests received by the Single View Creator (if the SVC receives requests from a single System of Record, we suggest using the name of said SoR). It is used as an identifier for debugging, tracking requests, and categorizing metrics. | - |
SINGLE_VIEWS_ERRORS_COLLECTION | ✓ | Name of the MongoDB CRUD collection you want to use to store Single View errors. | - |
KAFKA_CONSUMER_GROUP_ID | - | @deprecated - in favor of KAFKA_GROUP_ID. The Kafka consumer group identifier | - |
KAFKA_GROUP_ID | - | Defines the Kafka group id (a syntax like {tenant}.{environment}.{projectName}.{system}.{singleViewName}.single-view-creator is suggested) | - |
KAFKA_CLIENT_ID | - | The Kafka client identifier | - |
KAFKA_BROKERS_LIST | - | @deprecated - in favor of KAFKA_BROKERS. list of brokers the service needs to connect to | - |
KAFKA_BROKERS | - | list of brokers the service needs to connect to | - |
KAFKA_SASL_MECHANISM | - | The Kafka SASL mechanism to be used. Can be one of the following: "plain", "PLAIN", "scram-sha-256", "SCRAM-SHA-256", "scram-sha-512", "SCRAM-SHA-512" | plain |
KAFKA_SASL_USERNAME | - | username to use for logging into Kafka | - |
KAFKA_SASL_PASSWORD | - | password to use for logging into Kafka | - |
KAFKA_SVC_EVENTS_TOPIC | - | topic used to queue Single View Creator state changes (e.g. single view creation). This feature is deprecated in favor of KAFKA_SV_UPDATE_TOPIC and it will be removed soon | - |
SEND_BA_TO_KAFKA | - | true if you want to send to Kafka the before-after information about the update changes of the single view. This feature is deprecated in favor of ADD_BEFORE_AFTER_CONTENT using the 'sv-update' event and it will be removed soon | false |
KAFKA_BA_TOPIC | - | topic where to send the before-after messages which represent the single view document before and after a change. This feature is deprecated in favor of ADD_BEFORE_AFTER_CONTENT using the 'sv-update' event and it will be removed soon | - |
SEND_SV_UPDATE_TO_KAFKA | - | true if you want to send to Kafka the sv-update message about the update changes of the single view. Remember to also set the KAFKA_SV_UPDATE_TOPIC and the KAFKA_CLIENT_ID variables in order to use this feature. | false |
ADD_BEFORE_AFTER_CONTENT | - | true if you want to add the before and after content to the sv-update message, works only if SEND_SV_UPDATE_TO_KAFKA is set to true | false |
KAFKA_SV_UPDATE_TOPIC | - | topic where to send the sv-update message. Remember to also set the SEND_SV_UPDATE_TO_KAFKA and the KAFKA_CLIENT_ID variables in order to use this feature. | - |
UPSERT_STRATEGY | - | (v3.1.0 or higher) Strategy name or file path to update/insert Single View records; for more info, check out Upsert and Delete strategies. | replace |
DELETE_STRATEGY | - | (v3.1.0 or higher) Strategy name or file path to delete Single View records; for more info, check out Upsert and Delete strategies. | delete |
SINGLE_VIEWS_MAX_PROCESSING_MINUTES | - | (v3.4.2 or higher) Minutes to wait before processing again a Projection Change left in the IN_PROGRESS state | 30 |
CA_CERT_PATH | - | The path to the CA certificate, which should include the file name as well, e.g. /home/my-ca.pem | - |
ER_SCHEMA_FOLDER | - | The path to the ER Schema folder, e.g. /home/node/app/erSchema | - |
AGGREGATION_FOLDER | - | The path to the Aggregation folder, e.g. /home/node/app/aggregation | - |
USE_AUTOMATIC | - | Whether to use the low code architecture for the Single View Creator service or not | - |
PROJECTIONS_CHANGES_SOURCE | - | System to use to handle the Projection Changes, supported methods are KAFKA or MONGO | MONGO |
READ_TOPIC_FROM_BEGINNING | - | Available from v.5.5.0 of the Single View Creator Plugin. If set to true, the Single View Creator will start reading messages in the Projection Changes topic from the beginning, instead of from the latest committed offset. This happens only the first time the service connects to the topic, and it has effect only if PROJECTIONS_CHANGES_SOURCE is set to KAFKA. | false |
KAFKA_PROJECTION_CHANGES_TOPICS | - | Comma separated list of projection changes topics. Remember to set the PROJECTIONS_CHANGES_SOURCE to KAFKA to properly enable the consumer. | - |
KAFKA_SV_RETRY_TOPIC | - | Topic name for the Single View Retry mechanism. Setting it enables the mechanism, which uses both a Kafka consumer and a Kafka producer. | - |
KAFKA_SV_RETRY_MAX_ATTEMPTS | - | Number of attempts allowed for a failed Projection Change. Defining 0 will not retry the failed aggregations but will send the retry message to the KAFKA_SV_RETRY_TOPIC | 5 |
KAFKA_SV_RETRY_DELAY | - | Time in milliseconds during which the service buffers the retry messages before sending them. This is meant to reduce the impact on performance of sending the retry messages. | 5000 |
KAFKA_PROJECTION_UPDATE_TOPICS | - | Comma separated list of projection update topics | - |
SV_TRIGGER_HANDLER_CUSTOM_CONFIG | - | Path to the config defining SV-Patch actions | - |
USE_UPDATE_MANY_SV_PATCH | - | Use the MongoDB updateMany operation instead of findOneAndUpdate with cursors in the SV-Patch operation. This speeds up the Single View creation/update process, but it will not fire the Kafka events of Single View creation/update. As a natural consequence, if enabled, the following environment variables will be ignored: SEND_BA_TO_KAFKA, KAFKA_BA_TOPIC, SEND_SV_UPDATE_TO_KAFKA, KAFKA_SV_UPDATE_TOPIC, ADD_BEFORE_AFTER_CONTENT, KAFKA_SVC_EVENTS_TOPIC | false |
KAFKA_CONSUMER_MAX_WAIT_TIME_MS | - | (v6.2.1 or higher) The maximum amount of time in milliseconds the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by minBytes [1 byte] | 500 |
SV_UPDATE_VERSION | - | (v6.2.1 or higher) Define which version of the sv-update event should be emitted by the service. Accepted values are v1.0.0 and v2.0.0 . By default, for retro-compatibility, version v1.0.0 is employed | v1.0.0 |
CONTROL_PLANE_ACTIONS_TOPIC | - | @deprecated in favour of Runtime Management. Topic where the Single View Creator expects to receive the pause and resume action commands. | - |
CONTROL_PLANE_KAFKA_GROUP_ID | - | @deprecated in favour of Runtime Management. The Kafka consumer group identifier for the action commands client. | - |
KAFKA_PRODUCER_COMPRESSION | - | Starting from v6.5.0, it is possible to choose the type of compression applied to pr-update messages. Possible values are: gzip, snappy or none | none |
KAFKA_SECURITY_PROTOCOL | - | Starting from v6.7.1, it is possible to choose the security protocol used by the Kafka client connection. Possible values are (case insensitive): plaintext, ssl, sasl_ssl, or sasl_plaintext | ssl |
CONTROL_PLANE_CONFIG_PATH | - | Starting from v6.7.0, it is possible to configure Runtime Management. More details in the dedicated section | - |
CONTROL_PLANE_BINDINGS_PATH | - | Starting from v6.7.0, it is possible to configure Runtime Management. More details in the dedicated section | - |
If you want to enable any mechanism that uses Kafka in the Single View Creator, like the Single View Patch, remember to declare the following environment variables:
- KAFKA_BROKERS (required)
- KAFKA_CLIENT_ID (required)
- KAFKA_GROUP_ID (required if the mechanism needs a consumer)
- KAFKA_SASL_USERNAME
- KAFKA_SASL_PASSWORD
- KAFKA_SASL_MECHANISM
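For reference, here is a minimal sketch of how these variables would map onto a Kafka client configuration. It assumes a Node.js service built on the kafkajs library; this is an illustration of the mapping, not the actual Single View Creator source code.

```typescript
// Minimal sketch (assumes kafkajs): mapping the environment variables above
// onto a Kafka client. Not the actual Single View Creator code.
import { Kafka, SASLOptions } from 'kafkajs'

// SASL credentials are optional: set them only if your brokers require authentication
const sasl: SASLOptions | undefined = process.env.KAFKA_SASL_USERNAME
  ? {
      mechanism: (process.env.KAFKA_SASL_MECHANISM ?? 'plain').toLowerCase() as SASLOptions['mechanism'],
      username: process.env.KAFKA_SASL_USERNAME,
      password: process.env.KAFKA_SASL_PASSWORD ?? '',
    }
  : undefined

const kafka = new Kafka({
  clientId: process.env.KAFKA_CLIENT_ID,                 // required
  brokers: (process.env.KAFKA_BROKERS ?? '').split(','), // required, comma-separated list
  sasl,
})
```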
Consuming from Kafka
As you can see, the Single View Creator lets you configure which channel is used as input through the PROJECTIONS_CHANGES_SOURCE environment variable. The default channel for the Projection Changes is MongoDB, but this might not always be what you need. The service gives you the alternative of listening to Apache Kafka instead, which can be useful in two different cases:
- You want to use the Single View Trigger Generator to produce sv-trigger messages.
- You want to configure the Single View Patch cycle, which reads pr-update messages from the Real-Time Updater.
In both cases you have to configure all the required environment variables related to Kafka. First you need to configure KAFKA_BROKERS and KAFKA_GROUP_ID; then you probably need to configure your authentication credentials with KAFKA_SASL_MECHANISM, KAFKA_SASL_USERNAME and KAFKA_SASL_PASSWORD.
Once this is done, remember to set the PROJECTIONS_CHANGES_SOURCE environment variable to KAFKA and to check out the configuration page of the mechanism you are using, in order to complete the necessary steps.
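To make the flow concrete, here is a hypothetical consumer sketch, again assuming kafkajs and reusing the kafka client from the sketch above. It mirrors what the service does when PROJECTIONS_CHANGES_SOURCE is set to KAFKA, including the READ_TOPIC_FROM_BEGINNING behavior; the message handling is a placeholder.

```typescript
// Hypothetical consumer sketch, reusing the `kafka` client from the previous example.
const consumer = kafka.consumer({ groupId: process.env.KAFKA_GROUP_ID! })

await consumer.connect()
await consumer.subscribe({
  topics: (process.env.KAFKA_PROJECTION_CHANGES_TOPICS ?? '').split(','),
  // READ_TOPIC_FROM_BEGINNING only matters the first time the consumer group
  // connects, i.e. when no committed offset exists yet for these topics
  fromBeginning: process.env.READ_TOPIC_FROM_BEGINNING === 'true',
})

await consumer.run({
  eachMessage: async ({ topic, message }) => {
    // each record (e.g. an sv-trigger or pr-update message) triggers
    // the aggregation of the related Single View
    console.log(`change received from ${topic}:`, message.value?.toString())
  },
})
```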
Compression
Kafka messages can be sent using a particular compression encoding. The SVC is configured to accept messages having the following encodings:
- gzip
- snappy
These compression mechanisms can also be used by the microservice itself when producing Kafka messages: starting from version v6.5.0, you can specify the environment variable KAFKA_PRODUCER_COMPRESSION. Allowed values are gzip, snappy or none; if the variable is not specified, none will be the default compression used by the Single View Creator.
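As an illustration of what KAFKA_PRODUCER_COMPRESSION=gzip implies, this is how a kafkajs producer applies GZIP compression. It is a sketch with a hypothetical topic name, reusing the kafka client from the earlier example; note that Snappy support in kafkajs requires registering an external codec.

```typescript
// Illustrative sketch: producing messages with GZIP compression in kafkajs,
// reusing the `kafka` client from the earlier example. Topic name is hypothetical.
import { CompressionTypes } from 'kafkajs'

const producer = kafka.producer()
await producer.connect()
await producer.send({
  topic: 'my-sv-update-topic',          // hypothetical topic name
  compression: CompressionTypes.GZIP,   // what KAFKA_PRODUCER_COMPRESSION=gzip selects
  messages: [{ key: 'single-view-id', value: JSON.stringify({ /* event payload */ }) }],
})
```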
Compression and decompression always increase the delay between production and consumption of a message, hence they are not advised for applications with strong real-time requirements.
Handling Connections with Self Signed Certificates
Sometimes MongoDB or Kafka instances may expose a TLS connection whose certificate is issued by a self-signed certification authority.
Since service version 3.9.0, you can include additional certification authority certificates by providing the absolute path of a certificate file in the environment variable CA_CERT_PATH. This file should be included in your project as a Secret.
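For instance, a Node.js service would typically wire such a certificate into its MongoDB and Kafka clients as sketched below. This assumes the official mongodb driver and kafkajs, and is not the service's actual code.

```typescript
// Sketch: trusting a self-signed CA (mounted as a Secret at CA_CERT_PATH)
// for both the MongoDB and the Kafka connections.
import { readFileSync } from 'node:fs'
import { MongoClient } from 'mongodb'
import { Kafka } from 'kafkajs'

const caPath = process.env.CA_CERT_PATH!  // e.g. /home/my-ca.pem

const mongo = new MongoClient(process.env.PROJECTIONS_MONGODB_URL!, {
  tls: true,
  tlsCAFile: caPath,  // additional CA for the MongoDB TLS handshake
})

const kafkaClient = new Kafka({
  brokers: (process.env.KAFKA_BROKERS ?? '').split(','),
  ssl: { ca: [readFileSync(caPath, 'utf-8')] },  // same CA for the Kafka connection
})
```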
Error handling
When generating a Single View, every error that occurs is saved in MongoDB following the Single View Error format, which satisfies the schema requirements of the CRUD service, so that you can handle those errors using the Console. It is highly recommended to use a TTL index to enable the automatic deletion of older error records, which can be done directly from the Console, as explained here.
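If you prefer creating the TTL index programmatically rather than from the Console, a sketch like the following would work. It assumes, per the CRUD conventions, that the error records carry a createdAt date field, and uses a 7-day retention chosen purely for illustration.

```typescript
// Sketch: TTL index on the Single View Errors collection, so MongoDB
// automatically deletes error records older than 7 days.
// Assumes CRUD-compliant records with a `createdAt` date field.
import { MongoClient } from 'mongodb'

const client = new MongoClient(process.env.SINGLE_VIEWS_MONGODB_URL!)
await client
  .db(process.env.SINGLE_VIEWS_DATABASE)
  .collection(process.env.SINGLE_VIEWS_ERRORS_COLLECTION!)
  .createIndex({ createdAt: 1 }, { expireAfterSeconds: 7 * 24 * 60 * 60 })
```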
Errors are categorized with a code that you will find in the errorType property of the Single View Error record.
These are the currently emitted codes:
Code | Description | Retryable |
---|---|---|
NO_SV_GENERATED | Happens when the aggregation pipeline unexpectedly does not find the base projection record and thus returns empty | ✗ |
VALIDATION_ERROR | Happens when the result of an aggregation does not pass the validation defined by the user in the validator.js | ✗ |
ERROR_SEND_SVC_EVENT | Happens when the service fails to send a Kafka message notifying a successful aggregation | ✗ |
SINGLE_VIEW_AGGREGATION_MAX_TIME | Happens when aggregation pipeline takes too long and goes over the maximum established time defined on the SINGLE_VIEWS_MAX_PROCESSING_MINUTES environment variable | ✓ |
UNKNOWN_ERROR | Happens with an unexpected error, such as a connection error with Apache Kafka or MongoDB | ✓ |
Notice that the Retryable column indicates which types of errors are sent to the retry queue when the Single View Retry system is enabled.
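As an example, here is a hypothetical query, reusing the client from the TTL sketch above, to inspect the latest records for a given error code:

```typescript
// Hypothetical query: fetch the ten most recent validation failures.
const recentValidationErrors = await client
  .db(process.env.SINGLE_VIEWS_DATABASE)
  .collection(process.env.SINGLE_VIEWS_ERRORS_COLLECTION!)
  .find({ errorType: 'VALIDATION_ERROR' })
  .sort({ createdAt: -1 })
  .limit(10)
  .toArray()
```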
Single View Retry
This feature is supported from version 6.3.0 of the Single View Creator.
The Single View Retry mechanism lets you configure a Kafka topic as a DLQ where failed aggregation attempts are sent. Once a message is sent to that topic, it will be consumed by every Single View Creator that has the Single View Retry enabled and is connected to that topic; a new attempt to aggregate the failed Single View record is then performed.
The first thing you need to do to enable the mechanism is to define the KAFKA_SV_RETRY_TOPIC, alongside all the Kafka-related variables needed to make Kafka work in the service.
The messages sent to that topic have the Single View Trigger format; that is why, if you are already listening to Single View Trigger messages on Kafka as the main input of the service, you can re-use the same exact topic.
To customize the system, we also offer the environment variables KAFKA_SV_RETRY_MAX_ATTEMPTS and KAFKA_SV_RETRY_DELAY. Check them out in the Environment Variables table.
Runtime Management
This feature is supported from version 6.7.0 of the Single View Creator.
By specifying the environment variable CONTROL_PLANE_CONFIG_PATH, you enable the Single View Creator to receive and execute commands from the Runtime Management.
By design, every service interacting with the Control Plane starts up in a paused state, unless the Control Plane has already resumed the data stream before.
Therefore, when the Single View Creator starts up, the aggregation process will not start automatically.
In this case, you just need to send a resume command to the resource name (namely, the Single View name) managed by the Single View Creator.
You can read about the setup of the Single View Creator in its dedicated section.