MongoDB Reader is a service that exposes APIs which execute aggregation pipelines on MongoDB and return the result in several formats, such as JSON, JSON Lines, and CSV.
The service can be configured by creating custom profiles; these profiles can be provided as files in the microservice file system.
The environment variables of this service are the following:
MONGODB_URL: it is required. It is the connection string for MongoDB. Note that when an API performs heavy read operations on MongoDB, it can sometimes be useful to change the read preference. This must be specified in the connection string.
ENABLE_PROFILER: it is not required. If set to `true`, the service will log (at info level) the timing info for the first and the last record returned by the cursor.
PROFILE_DIR: the path to the profiles directory; it can be any path where you wish to mount the configuration files.
Other required environment variables are the ones required by the Mia-Platform Node.js Service Library.
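A minimal environment configuration might look like the following. The host, credentials, database name, and mount path are placeholders; `readPreference` is a standard MongoDB connection-string option:

```
MONGODB_URL=mongodb://user:password@mongo-host:27017/mydb?readPreference=secondaryPreferred
ENABLE_PROFILER=true
PROFILE_DIR=/home/node/app/profiles
```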
In the profiles folder, many configuration files can be provided; each profile file specifies a different route and MongoDB aggregation pipeline.
For each of these files, the service will expose three APIs:
GET /<name>/json: gives the result of the pipeline in `json` format (always an array)
GET /<name>/jsonl: gives the result of the pipeline in `jsonl` format (one JSON document per line)
GET /<name>/csv: gives the result of the pipeline in `csv` format
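Assuming a hypothetical profile named `users`, the three formats would differ roughly as follows (a sketch, not actual service output):

```
GET /users/json   → [{"name":"Ada","age":36},{"name":"Alan","age":41}]
GET /users/jsonl  → {"name":"Ada","age":36}
                    {"name":"Alan","age":41}
GET /users/csv    → name,age
                    Ada,36
                    Alan,41
```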
- name: the string that defines the endpoint (e.g. for the example below, the exposed APIs will be of the form `GET /<name>/json`, `GET /<name>/jsonl`, and `GET /<name>/csv`)
- parameters: an array of objects. Each object defines a query parameter of the API: its name, its type, and whether it is required.
- pathParameters: an object that defines the path parameters.
- inputFormatter: a function that takes two objects as input, representing the query and path parameters respectively. The purpose of this function is to format the input before passing it to the `aggregationFunction`.
- query: an object having the following fields:
- collectionName: the name of the collection on which to execute the aggregation.
- aggregationFunction: a function that takes as input the output of the `inputFormatter` and returns the aggregation pipeline, i.e. an array of stages.
- castInterpolatedPipeline: this function is deprecated and must be left as in the example above, unless the deprecated JSON file configuration described below is used.
- outputFormatter: a function that takes three parameters as input: one document of the aggregation result, the output format (`json`, `jsonl`, or `csv`), and the output of the `inputFormatter` function. The purpose of this function is to format the output of the aggregation as one prefers.
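The fields above can be sketched as a single JavaScript profile module. The field names follow the list above; the collection name `users`, the endpoint name `users-by-age`, and the `minAge` parameter are hypothetical, chosen only for illustration:

```javascript
// Hypothetical profile file, e.g. placed inside PROFILE_DIR.
// Field names follow the documented profile structure; the
// collection, endpoint, and parameter names are illustrative only.
const profile = {
  // Endpoint name: exposes GET /users-by-age/json, /jsonl, /csv
  name: 'users-by-age',
  // Query parameters accepted by the API
  parameters: [
    { name: 'minAge', type: 'number', required: true },
  ],
  // Path parameters (none in this sketch)
  pathParameters: {},
  // Format query and path parameters before the aggregation
  inputFormatter: (queryParams, pathParams) => ({
    minAge: Number(queryParams.minAge),
  }),
  query: {
    collectionName: 'users',
    // Build the pipeline (an array of stages) from the formatted input
    aggregationFunction: (input) => [
      { $match: { age: { $gte: input.minAge } } },
      { $project: { _id: 0, name: 1, age: 1 } },
    ],
    // Deprecated: leave as-is (identity) unless the deprecated
    // JSON file configuration is used
    castInterpolatedPipeline: (pipeline) => pipeline,
  },
  // Shape each result document according to the requested format
  outputFormatter: (doc, format, input) =>
    format === 'csv' ? { name: doc.name, age: doc.age } : doc,
};

module.exports = profile;
```

This sketch keeps `inputFormatter` responsible for casting (query parameters arrive as strings), so `aggregationFunction` can assume well-typed input.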
The aggregation pipeline can also be written in JSON files like the one that follows:
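A pipeline file of this kind is simply a JSON array of stages. The sketch below is illustrative (the field and collection contents are hypothetical, and the exact placeholder syntax for interpolated parameters is not shown here):

```json
[
  { "$match": { "status": "active" } },
  { "$project": { "_id": 0, "name": 1 } }
]
```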
In this case, the input parameters will be interpolated into placeholder strings inside the JSON file.
The `aggregationFunction` field must be replaced with the `aggregationFilePath` field, which is a string specifying the JSON file path.
We consider this configuration deprecated.