Version: 13.x (Current)

Main Entrypoint

lc39 makes some assumptions about how the main entrypoint of your service is laid out.
This allows it to correctly import and validate the functions and data passed to it, and to correctly create and launch the Fastify instance.

The best way to use lc39 is via its CLI, running a command like:

lc39 ./index.js --env-path .env

with the available options documented here.

It's also possible to use lc39 as a function:

```javascript
const lc39 = require('@mia-platform/lc39')

async function service(fastify) {
  fastify.get('/', async (req, reply) => {
    return { hello: 'world' }
  })
}

lc39(service, {
  logLevel: 'silent',
  envVariables: {},
  swaggerDefinition: {
    info: swaggerInfo,
  },
})
```

Main Exported Function

Your service must export a function from its module. The function can take one or two parameters:

  • fastify: the instance of the fastify server created by lc39
  • options: an optional parameter containing the object passed to Fastify for setting up your module
```javascript
module.exports = async function service(fastify) {
  fastify.get('/', async (req, reply) => {
    return { hello: 'world' }
  })
}
```

As you can see, the function must be declared async and must be exported as the root of the module.

Fastify Sensible

lc39 registers the fastify-sensible plugin, so you can use all of its utilities in your module implementation: httpErrors and assert helpers are only one call away.

Custom Status Routes

With lc39 your service will automatically inherit three fixed routes for getting information on the service:

  • GET /-/healthz
  • GET /-/ready
  • GET /-/check-up

The first route can be used as a probe for load balancers, status dashboards, and as a livenessProbe for Kubernetes.
By default, the route will always respond with an OK status and the 200 HTTP code as soon as the service is up.

The second route can be used as a readinessProbe for Kubernetes.
As with the first route, the default implementation of this endpoint will always respond with an OK status and the 200 HTTP code as soon as the service is up.

The third route can be used as a check-up route, to verify whether all the functionalities of the service are available. The purpose of this route is to check the availability of all the dependencies of the service and reply with a check-up report.
As with the others, the default implementation of this endpoint will always respond with an OK status and the 200 HTTP code as soon as the service is up.

The default implementations are a nice placeholder until you can add some logic tied to your service.
To do so, you can add the following module.exports to your main entrypoint to customize the behavior.

```javascript
module.exports.readinessHandler = async function readinessHandler(fastify) {
  // Add your custom logic for /-/ready here
  return { statusOK: true }
}

module.exports.healthinessHandler = async function healthinessHandler(fastify) {
  // Add your custom logic for /-/healthz here
  return { statusOK: true }
}

module.exports.checkUpHandler = async function checkUpHandler(fastify) {
  // Add your custom logic for /-/check-up here
  return { statusOK: true }
}
```

These functions must return an object that customizes the response of the server. The only required property is statusOK, a boolean: true returns a 200 response, false returns a 503.
Additionally, you can add any property you want and it will be appended to the response. If you add a name and/or version key, your value will override the default ones parsed from package.json.
These endpoints conform to the JSON schema that you can find here.

These endpoints are permanently set to log level silent, to decrease the amount of noise in the logs during deployment.

Prometheus Metrics

By default, lc39 exposes the /-/metrics endpoint for Prometheus. The response body contains process, garbage collection, and HTTP information.

You can also define your custom metrics in the following way:

```javascript
module.exports = async function plugin(fastify) {
  fastify.get('/', function (request, reply) {
    // ...
  })
}

module.exports.getMetrics = function getMetrics(prometheusClient) {
  const myCounter = new prometheusClient.Counter({
    name: 'custom_metric',
    help: 'Custom metric',
  })
  return { myCounter }
}
```

It is possible to add options to the metrics plugin to change the default behavior. lc39 uses fastify-metrics under the hood, so it is possible to configure all the properties except the exposed endpoint.

```javascript
module.exports.options = {
  metrics: {
    enableRouteMetrics: false,
  },
}
```

Exposed Swagger Documentation

By default, lc39 will import the fastify-swagger module to expose the service documentation following the OpenAPI 3 specification. In order to expose API documentation following the Swagger 2.0 specification instead, the service should export a swaggerDefinition with openApiSpecification set to 'swagger'.

```javascript
module.exports.swaggerDefinition = {
  openApiSpecification: 'swagger',
}
```

If you want to further customize the generated OpenAPI file, you can export the following object, which is accepted by the dynamic implementation of fastify-swagger.

```javascript
module.exports.swaggerDefinition = {
  info: {
    title: 'Service title',
    description: 'The description of the service functionality',
    version: 'v1.0.0',
  },
  consumes: ['application/json'],
  produces: ['application/json'],
}
```
If you don't export this object, lc39 will automatically create it for you using the data found in the package.json of your project.

If you need to edit the schema used to generate the swagger, you can use transformSchemaForSwagger to do it.

```javascript
module.exports = async function plugin(fastify) {
  fastify.get('/', {
    schema: {
      querystring: {
        label: { type: 'string' },
      },
    },
  }, function returnConfig(request, reply) {
    reply.send({})
  })
}
```

```javascript
module.exports.transformSchemaForSwagger = ({ schema, url }) => {
  const { querystring } = schema
  const converted = { ...schema }
  if (querystring) {
    converted.querystring = convertQuerystringSchema(querystring)
  }
  return {
    schema: converted,
    url,
  }
}
```

This method is called for each route. The schema parameter is the schema object set on the route. transformSchemaForSwagger is only called the first time /documentation/json is visited.
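Note that convertQuerystringSchema in the example above is a helper you would provide yourself, not something exported by lc39. A minimal sketch, assuming the goal is to wrap the shorthand property map in a standard JSON schema object, could look like this:

```javascript
// hypothetical helper: wraps shorthand property definitions
// (e.g. { label: { type: 'string' } }) in a full JSON schema object
function convertQuerystringSchema(querystring) {
  return {
    type: 'object',
    properties: querystring,
  }
}
```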

OpenTelemetry tracing [experimental]

Tracing is experimental and may introduce breaking changes even in minor releases.

lc39 allows enabling tracing of the application using the OpenTelemetry SDK. To enable it, use the --enable-tracing option of the CLI. It is possible to change the configuration of the SDK using environment variables (here are the docs of the Node SDK).

Below are some of the environment variables useful for configuring the service. A full list is available here (check whether the SDK supports them):

  • OTEL_TRACES_EXPORTER (default is otlp): List of exporters to be used for tracing, separated by commas. Options include otlp, jaeger, zipkin, and none.
  • OTEL_PROPAGATORS: Propagators to be used as a comma-separated list. e.g. b3
  • OTEL_SERVICE_NAME (required): the service name. If not set, the service is reported as unknown in traces.
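Putting it together, a launch command could look like the following sketch, where the service name and entrypoint path are placeholders:

```shell
# placeholders: my-service and ./index.js should match your project
OTEL_SERVICE_NAME=my-service \
OTEL_TRACES_EXPORTER=otlp \
OTEL_PROPAGATORS=b3 \
lc39 ./index.js --env-path .env --enable-tracing
```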