# Usage
The integration tests service exposes HTTP endpoints to launch integration tests and to retrieve their reports and results.
## POST /tests/
Launch a single integration test using Newman runner.
### Request
The endpoint expects a multipart request with the following parts:
Name | Required | Type | Description |
---|---|---|---|
tagName | Yes | field | Name of the tag associated with the test. |
testId | Yes | field | Unique identifier of the test. |
userEmail | No | field | Email of the user that runs the tests. |
collection | Yes | file | A Postman collection v2 exported as a JSON file. |
environment | No | file | A Postman environment exported as a JSON file. |
assets | No | file | A .zip file containing additional test assets. |
You can add extra fields (but not files) to the request; they are passed to Newman as environment variables and can be referenced inside your Postman collection using handlebars syntax.
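As a client-side sketch (assuming Node 18+, where `FormData` and `Blob` are globals; `buildTestRequest` and its parameter names are hypothetical, not part of the service), the multipart body could be assembled like this:

```javascript
// Sketch: assembling the multipart body for POST /tests/.
// The collection is shown here as an in-memory object serialized to a Blob;
// a real client would read the exported Postman JSON files from disk.
function buildTestRequest({ tagName, testId, userEmail, collectionJson, extraVars = {} }) {
  const form = new FormData();
  form.append('tagName', tagName); // required field
  form.append('testId', testId);   // required field
  if (userEmail) form.append('userEmail', userEmail); // optional field
  // Required file part: the Postman collection v2 export.
  form.append(
    'collection',
    new Blob([JSON.stringify(collectionJson)], { type: 'application/json' }),
    'collection.json'
  );
  // Extra *fields* are forwarded to Newman as environment variables.
  for (const [name, value] of Object.entries(extraVars)) {
    form.append(name, value);
  }
  return form;
}
```

The resulting `FormData` can then be sent with any HTTP client that supports multipart bodies.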
If the request is valid, the endpoint creates a temporary working directory, saves all the files inside it, and launches Newman with the following options:

Name | Required | Description |
---|---|---|
testId | Yes | Unique identifier of the test. |
collection | Yes | A Postman collection v2 exported as a JSON file. |
environment | No | A Postman environment exported as a JSON file. |
After Newman has completed all the tests with the same `tagName`, the service creates a new tag on GitLab; the name of the tag is the same as `tagName`, but with the `-noartifacts` suffix replaced by `-artifacts`.
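The suffix swap described above can be sketched as a one-line transformation (`artifactsTagFor` is a hypothetical helper name, not part of the service):

```javascript
// Sketch of the tag rename: a trailing `-noartifacts` suffix
// becomes `-artifacts` in the tag pushed to GitLab.
function artifactsTagFor(tagName) {
  return tagName.replace(/-noartifacts$/, '-artifacts');
}
```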
### Response
If the request is processed correctly, the endpoint returns an HTTP 202 (Accepted) response as soon as Newman starts performing the integration test (since the run can take a long time, the response does not wait for it to finish), with a body like this:
```json
{
  "tag": "v1.0.0-artifacts"
}
```
You can retrieve the HTML reports generated during the tests by calling the `GET /reports/:id` endpoint, passing as `id` the name of the tag received in the response (`v1.0.0-artifacts` in the example above).
If the request is missing required fields or files, or contains additional files, the endpoint returns an HTTP 400 (Bad Request) response with a payload like this:
```json
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "A human-readable error message with additional details"
}
```
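Putting the two response shapes together, a client could handle the launch result roughly like this (`tagFromLaunchResponse` is a hypothetical helper name):

```javascript
// Sketch: interpret the JSON body returned by POST /tests/.
// A 202 body carries the `tag` to use with the report endpoints;
// a 400 body carries the statusCode/error/message trio shown above.
function tagFromLaunchResponse(status, body) {
  if (status === 202) return body.tag; // e.g. "v1.0.0-artifacts"
  if (status === 400) throw new Error(`${body.error}: ${body.message}`);
  throw new Error(`Unexpected status ${status}`);
}
```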
## GET /reports/:id
This endpoint allows you to retrieve all the HTML reports generated by the tests linked to a given tag.
By default, once this endpoint is called, all references to the tests are removed from the cache and the local working directories. If you need to keep the test reports in memory, call the API with the optional query parameter `removeFiles` set to `false` (default: `true`).
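As an illustration, the request path with the optional `removeFiles` flag could be built like this (`reportsPath` is a hypothetical helper name):

```javascript
// Sketch: build the GET /reports/:id path, optionally keeping the
// reports cached by passing removeFiles=false.
function reportsPath(tag, { removeFiles = true } = {}) {
  const qs = new URLSearchParams();
  if (!removeFiles) qs.set('removeFiles', 'false');
  const query = qs.toString();
  return `/reports/${encodeURIComponent(tag)}${query ? `?${query}` : ''}`;
}
```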
### Response
If the request is processed correctly, the endpoint returns an HTTP 200 response whose payload is a zip file containing all the reports, each one named after the original `testId` with the `.html` extension.
## GET /test-executions/:id
This endpoint allows you to retrieve all the results of the tests related to a given tag.
#### Params

- `id`: id of the test group (i.e. the tag returned by the POST, `v1.0.0-artifacts` in the example above)
#### Query params

| Name | Required | Type | Description |
|---|---|---|---|
| `tagName` | Yes | string | Name of the tag associated with the test. |
| `removeFiles` | No | boolean | Whether to delete the report files from memory (default: `true`). |
You can retrieve the test results produced during the tests by calling the `GET /test-executions/:id` endpoint, passing as `id` the name of the tag received in the response (`v1.0.0-artifacts` in the example above).
### Response
If the request is processed correctly, the endpoint returns an HTTP 200 and a payload with the test run information, formatted as in the following example:
```js
{
  version: '1.0.0', // the tag name without the postfix `-artifacts`
  userEmail, // the mail defined as a multipart field when calling the POST API (undefined if not specified)
  tests: [ // list of tests information
    {
      issues: ['ISSUE-42'], // list of the issues associated with the given test
      success: false, // result of the test (true|false)
      errors: [ // list of the errors occurred (if any) during the test run
        {
          name: 'AssertionError',
          index: 0,
          test: 'Check response status code',
          message: "expected { Object (id, _details, ...) } to have property 'code'",
          stack: "AssertionError: expected { Object (id, _details, ...) } to have property 'code'\n at Object.eval sandbox-script.js:1:1)",
        }
      ],
    },
  ],
}
```
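A client consuming this payload might want a quick pass/fail summary. As a sketch (`summarizeExecutions` is a hypothetical helper name, not part of the service):

```javascript
// Sketch: summarize a GET /test-executions/:id payload by counting
// failed tests and collecting the issues they reference.
function summarizeExecutions(payload) {
  const failed = payload.tests.filter((t) => !t.success);
  return {
    version: payload.version,
    total: payload.tests.length,
    failures: failed.length,
    failedIssues: failed.flatMap((t) => t.issues),
  };
}
```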