Single View Creator
The Single View Creator is the service that keeps the Single View updated with data retrieved from Projections. This service is available as a plugin or as a template:
- plugin: it allows you to use the Single View Creator as a black-box. You just need to configure it through the Config Maps and environment variables
- template: it gives you access to the source code of the Single View Creator, which will be hosted on a Git repository. You will need to update its dependencies and maintain the code.
We strongly recommend using the plugin. The template is intended only for advanced use cases that the plugin cannot cover.
|Folder where configuration files are mounted
|Level to use for logging
|Port exposed by the service
|Identifies the type of projection changes that need to be read. It should be the same as the Single View name you want to update.
|Polling interval in milliseconds: every X milliseconds, the service wakes up and checks whether there are projection changes in
NEW state to work on. The service keeps working until no more new projection changes are found; it then goes back to sleep for X milliseconds.
|MongoDB connection string where Projections are stored. Must be a valid URI
|MongoDB connection string where Single Views must be stored. Must be a valid URI
|The db from where projection changes are read. If not set, the database from
PROJECTIONS_MONGODB_URL is used.
|The db from where projection changes are read.
|The db from where Projections are read. If not set,
PROJECTIONS_CHANGES_DATABASE is used.
|If you have set a custom projection changes collection name from the Advanced settings, set its name here. Otherwise, the default name is used, where
SYSTEM_ID is the id of the System of Record this Single View Creator is responsible for.
|The db where Single Views are written.
|It must be equal to the name of the Single View the service is in charge of keeping updated.
|Name representing the source of the requests received by the Single View Creator (if the SVC receives requests from a single System of Record, we suggest using the name of that SoR). It is used as an identifier for debugging, for tracking requests, and for categorizing metrics.
|Name of the MongoDB CRUD collection where Single View errors are stored.
|@deprecated - in favor of KAFKA_GROUP_ID. The Kafka consumer group identifier
|defines the Kafka group id (it is suggested to use a syntax like
|The Kafka client identifier
|@deprecated - in favor of KAFKA_BROKERS. List of brokers the service needs to connect to
|List of brokers the service needs to connect to
|The Kafka SASL mechanism to be used. Can be one of the following: "plain", "PLAIN", "scram-sha-256", "SCRAM-SHA-256", "scram-sha-512", "SCRAM-SHA-512"
|Username to use for logging into Kafka
|Password to use for logging into Kafka
|Topic used to queue Single View Creator state changes (e.g. Single View creation). This feature is deprecated in favor of KAFKA_SV_UPDATE_TOPIC and will be removed soon
|Set to true to send to Kafka the
before-after information about the update changes of the Single View. This feature is deprecated in favor of ADD_BEFORE_AFTER_CONTENT with the 'sv-update' event and will be removed soon
|Topic where the
before-after messages are sent, representing the Single View document before and after a change. This feature is deprecated in favor of ADD_BEFORE_AFTER_CONTENT with the 'sv-update' event and will be removed soon
|Set to true to send to Kafka the
sv-update message about the update changes of the Single View
|Set to true to add the before and after content to the
sv-update message. It works only if
SEND_SV_UPDATE_TO_KAFKA is set to true
|Topic where to send the sv-update messages
|(v3.1.0 or higher) Strategy name or file path to update/insert Single View records. For more information, check out Upsert and Delete strategies.
|(v3.1.0 or higher) Strategy name or file path to delete Single View records. For more information, check out Upsert and Delete strategies.
|(v3.4.2 or higher) Time to wait before processing again a Projection with state IN_PROGRESS
|The path to the CA certificate, which should include the file name as well, e.g.
|The path to the ER Schema folder, e.g.
|The path to the Aggregation folder, e.g.
|Whether to use the low code architecture for the Single View Creator service or not
|System to use to handle the Projection Changes, supported methods are KAFKA or MONGO
|Available from v5.5.0 of the Single View Creator Plugin. If set to true, the Single View Creator will start reading messages in the Projection Changes topic from the beginning, instead of from the message with the latest committed offset. This will happen only the first time connecting to the topic, and it has effect only if
PROJECTIONS_CHANGES_SOURCE is set to KAFKA.
|Comma separated list of projection changes topics. Remember to set
PROJECTIONS_CHANGES_SOURCE to KAFKA to properly enable the consumer.
|Topic name for the Single View Retry mechanism. Setting it enables the system, which uses a Kafka consumer and a Kafka producer.
|Number of attempts allowed for a failed Projection Change. Setting
0 will not retry the failed aggregations, but the retry message will still be sent to the KAFKA_SV_RETRY_TOPIC
|Time in milliseconds during which the service buffers the retry messages before sending them. This is meant to reduce the performance impact of sending the retry messages.
|Comma separated list of projection update topics
|Path to the config defining SV-Patch actions
|Use the MongoDB
updateMany operation instead of
findOneAndUpdate with cursors in the SV-Patch operation. This will speed up the Single View creation/update process, but it will not fire the Kafka events of Single View creation/update. As a consequence, if enabled, the following environment variables will be ignored:
|(v6.2.1 or higher) The maximum amount of time in milliseconds the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by minBytes [1 byte]
|(v6.2.1 or higher) Define which version of the
sv-update event should be emitted by the service. Accepted values are
v1.0.0 and v2.0.0. By default, for retro-compatibility, version
v1.0.0 is employed
|Topic where the Single View Creator expects to receive the action commands of the Runtime Management
|The Kafka Consumer Group Identifier for the action commands client.
Setting up both the action commands topic variable and
CONTROL_PLANE_KAFKA_GROUP_ID enables the communication between the Single View Creator and the Runtime Management.
This means that the Single View Creator will receive and execute the commands issued by the latter.
Check the Runtime Management section of this page for more information.
If you want to enable any mechanism that uses Kafka in the Single View Creator, like the Single View Patch, remember to declare the following environment variables:
KAFKA_GROUP_ID (required if the mechanism needs a consumer)
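As a rough illustration of the polling cycle described in the table above (the service waking up every X milliseconds and draining all projection changes in NEW state before sleeping again), here is a minimal, hypothetical Python sketch; `fetch_new_changes` and `handle_change` are illustrative placeholders, not the service's actual internals:

```python
import time

def run_polling_cycle(fetch_new_changes, handle_change, interval_ms, cycles=1):
    """Hypothetical sketch of the projection-changes polling loop:
    wake up, drain every change in NEW state, then sleep interval_ms."""
    for _ in range(cycles):
        while True:
            batch = fetch_new_changes()  # projection changes in NEW state
            if not batch:
                break                    # nothing left: go back to sleep
            for change in batch:
                handle_change(change)    # aggregate the Single View record
        time.sleep(interval_ms / 1000)   # sleep until the next wake-up
```

The service keeps consuming until a fetch returns no changes, which is why a burst of changes is processed in a single wake-up.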
Consuming from Kafka
As you can see, the Single View Creator lets you configure which channel is used as input through the
PROJECTIONS_CHANGES_SOURCE environment variable. The default channel for the Projection Changes is MongoDB, but this might not always be what you need. The service gives you the alternative to listen to Apache Kafka instead, which can be useful in two different cases:
- You want to use the Single View Trigger Generator to produce
- You want to configure the Single View Patch cycle, which reads
pr-update messages from the Real-Time Updater.
In both cases, you have to configure all the required Kafka-related environment variables. First you need to configure
KAFKA_GROUP_ID; then you probably need to configure your authentication credentials with the SASL-related variables.
Once this is done, remember to set the
PROJECTIONS_CHANGES_SOURCE environment variable to
KAFKA, and check out the configuration page of the system you are using to complete the necessary steps.
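Putting the steps above together, a possible environment configuration might look like the following sketch (broker addresses and the group id are placeholder values; the credential and topic variables described in the table above may also be required):

```sh
PROJECTIONS_CHANGES_SOURCE=KAFKA
KAFKA_BROKERS=broker-1.example.com:9092,broker-2.example.com:9092
KAFKA_GROUP_ID=my-company.development.my-single-view.svc-consumer
```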
Handling Connections with Self Signed Certificates
Sometimes, MongoDB or Kafka instances may use a TLS connection with a self-signed certification authority.
Since service version
3.9.0, you can include additional certification authority certificates by providing the absolute path of a certification file in the environment variable
CA_CERT_PATH. This file should be included in your project as a Secret.
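For example, assuming the certificate Secret is mounted at a path of your choosing (the path below is a placeholder):

```sh
CA_CERT_PATH=/run/secrets/additional-ca/ca.crt
```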
Error Handling
When generating a Single View, every error that occurs is saved in MongoDB following the Single View Error format, which satisfies the schema requirements of the CRUD service, so that you can handle those errors using the Console. It is highly recommended to use a TTL index to enable the automatic deletion of older messages; this can be done directly from the Console, as explained here.
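As an illustration, a TTL index could also be created from mongosh as in the sketch below; the collection name and the `createdAt` field are assumptions for the example, not the actual schema:

```javascript
// Hypothetical mongosh sketch: automatically delete Single View Error
// records older than 30 days via a TTL index on an assumed timestamp field.
db.getCollection("single-view-errors").createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 30 * 24 * 60 * 60 }
)
```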
Errors are categorized with a code that you will find in the
errorType property of the Single View Error record.
These are the currently emitted codes:
|Happens when the aggregation pipeline unexpectedly does not find the base Projection record and thus returns an empty result
|Happens when the result of an aggregation does not pass the validation defined by the user in the
|Happens when the service fails to send a Kafka message to notify a successful aggregation
|Happens when the aggregation pipeline takes too long, exceeding the maximum time defined by the
SINGLE_VIEWS_MAX_PROCESSING_MINUTES environment variable
|Happens with an unexpected error, such as a connection error with Apache Kafka or MongoDB
Notice that the retryable column indicates which types of errors are sent to the retry queue when the Single View Retry system is enabled.
Single View Retry
This feature is supported from version
6.3.0 of the Single View Creator.
The Single View Retry mechanism lets you configure a Kafka topic as a DLQ where failed aggregation attempts are sent. Once a message is sent to that topic, it will be consumed by every Single View Creator that has the Single View Retry enabled and is connected to that topic; then an attempt to aggregate the failed Single View record will be performed.
The first thing you need to do to enable the mechanism is to define the
KAFKA_SV_RETRY_TOPIC, alongside all the Kafka-related variables needed to make Kafka work in the service.
The messages sent to that topic follow the Single View Trigger format; that is why, if you are already listening to Single View Trigger messages on Kafka as the main input of the service, you can re-use the exact same topic.
To customize the system, we also offer additional environment variables, such as
KAFKA_SV_RETRY_DELAY. Check them out in the Environment Variables table.
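For instance, a sketch of the retry configuration could look like this (the topic name and the delay are placeholder values; the delay is expressed in milliseconds, so 5000 buffers retry messages for five seconds before sending):

```sh
KAFKA_SV_RETRY_TOPIC=my-company.development.my-single-view.sv-retry
KAFKA_SV_RETRY_DELAY=5000
```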
Runtime Management
This feature is supported from version
6.4.0 of the Single View Creator.
By specifying the Control Plane environment variables, including
CONTROL_PLANE_KAFKA_GROUP_ID, you enable the Single View Creator to receive and execute the commands from the Runtime Management.
By design, every service interacting with the Control Plane starts up in a paused state, unless a
resume command applying to that specific service is already present in the topic. Therefore, when the Single View Creator starts up, the aggregation process will not start automatically. In this case, you just need to send a
resume command to the resource name (namely, the Single View name) managed by the Single View Creator.
Read the Interacting with the Frontend section of the Runtime Management documentation page to learn more about the Control Plane and how to use it.