Get scoped Testresult Stats
This API resource allows you to get statistics about the test results of either a single test or of multiple tests within a suite.
Background
BiG EVAL can run a standalone test or multiple tests of a suite at once, either when you press the play button or when you start a test run in an automated way. This process is called a "run", and the run object serves as a container for all test results collected during the run.
Each run object has a scope attribute that distinguishes runs created by executing a single test from runs created by executing multiple tests of a suite.
- Scope 2 = Run was created for a single test.
- Scope 3 = Run was created for a suite.
The ID of the executed suite or test is stored in the scopeIdentifier attribute of the run object.
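As a sketch, the two scope values can be represented as a small lookup table. The helper below is purely illustrative and not part of the BiG EVAL API; the numeric values come from the list above.

```python
# Map of runScope values to their meaning, per the list above.
RUN_SCOPES = {
    2: "single test",
    3: "suite",
}

def describe_run(scope: int, scope_identifier: int) -> str:
    """Return a human-readable description of a run's scope.
    (Illustrative helper, not part of the BiG EVAL API.)"""
    kind = RUN_SCOPES.get(scope, "unknown scope")
    return f"Run for {kind} with ID {scope_identifier}"

print(describe_run(3, 3))  # Run for suite with ID 3
```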
Request
URL
/api/v1/default/statistics/testresultsscoped?runScope={runScope}&runScopeIdentifier={runScopeIdentifier}&skip={skip}&take={take}
Example:
The following request returns statistics about the last five runs of the suite with the ID 3:
/api/v1/default/statistics/testresultsscoped?runScope=3&runScopeIdentifier=3&skip=0&take=5
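To illustrate how the query string is assembled, here is a minimal Python sketch that builds the same URL. The host name is an assumption mirroring the PowerShell example further below; only the path and parameter names come from this documentation.

```python
from urllib.parse import urlencode

# Assumed host, as in the PowerShell example; replace with your instance URL.
BASE_URL = "https://mybigevalserver/api/v1/default/"

def stats_url(run_scope: int, run_scope_identifier: int,
              skip: int = 0, take: int = 5) -> str:
    """Build the query URL for the scoped test result statistics endpoint."""
    query = urlencode({
        "runScope": run_scope,
        "runScopeIdentifier": run_scope_identifier,
        "skip": skip,
        "take": take,
    })
    return f"{BASE_URL}statistics/testresultsscoped?{query}"

# The example request from above: last five runs of suite 3.
print(stats_url(3, 3, skip=0, take=5))
```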
Verb
GET
URL-Parameters
The following parameters are used in the URL or query string, shown as placeholders in curly brackets. Replace each placeholder, including the curly brackets, with the actual value.
Parameter | Description |
---|---|
runScope | Set this to 2 for a run of a single test, or 3 for a run of a suite. |
runScopeIdentifier | Depending on the value of the runScope parameter, the ID of the test or the suite whose stats should be returned. |
skip | The number of stats to skip in the response. E.g. a value of 3 means that the stats of the three most recent runs are not returned. |
take | The number of stats to return in the response. |
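The skip and take parameters form a simple offset-based pagination scheme. As a sketch (the helper name is our own, not part of the API), a zero-based page index can be translated into these parameters like this:

```python
def page_params(page: int, page_size: int) -> dict:
    """Translate a zero-based page index into skip/take query parameters.
    (Illustrative helper, not part of the BiG EVAL API.)"""
    return {"skip": page * page_size, "take": page_size}

# The third page of five stats each skips the ten most recent runs.
print(page_params(2, 5))  # {'skip': 10, 'take': 5}
```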
Header
Parameter | Description |
---|---|
Authorization | The Access-Token as explained in Authentication. |
Body
Not needed.
Response
{
  "status": "OK",
  "items": [
    {
      "runId": 10386,
      "timestamp": "2018-09-10T07:42:27.0129498",
      "status": "FINISHED",
      "runScopeId": 3,
      "runScopeName": "30 - Scripting",
      "succeededCount": 37,
      "failedCount": 1,
      "exceptionsCount": 0,
      "notEnoughProbesCount": 0,
      "differentDimensionalityCount": 0,
      "notEvaluatedCount": 0,
      "executingCount": 0,
      "totalCount": 38
    }
  ],
  "total": 8,
  "skip": 0,
  "take": 1,
  "totalUnscoped": 41
}
Element | Type | Description |
---|---|---|
status | String | Returns the status of the API request. OK means that the request could be completed successfully. |
total | Int64 | The number of records available within the filtered scope. |
skip | Int32 | The number of records that were skipped to produce the response. |
take | Int32 | The number of records returned in the result. |
totalUnscoped | Int64 | The number of records available without filtering on a specific scope. |
The items array contains as many statistics elements as requested by the take parameter. Each element has the following attributes.
Element | Type | Description |
---|---|---|
runId | Int64 | The ID of the run whose statistics follow. |
timestamp | DateTime | The point in time when the last status of the run was saved. |
status | String | The current status of the run. The possible values are “FINISHED” or “RUNNING”. |
runScopeId | Int32 | 2 = When a run was executed for a single test only; 3 = When a run was executed for a suite. |
runScopeName | String | The name of either the test or the suite depending on the requested run scope. |
succeededCount | Int32 | The number of tests that passed the test conditions. |
failedCount | Int32 | The number of tests that did not pass the test conditions. |
exceptionsCount | Int32 | The number of tests that reported an exception. |
notEnoughProbesCount | Int32 | The number of tests that did not have enough probes to be executed. Depending on the test method, a test needs a specific or a minimum number of probes to run. For example, comparing data needs at least two probes; otherwise the test fails. |
differentDimensionalityCount | Int32 | The number of tests that failed due to probes with different dimensionality. This usually happens when a data comparison is configured incorrectly. |
notEvaluatedCount | Int32 | The number of tests that could not be evaluated for any other reason. |
executingCount | Int32 | The number of tests that are currently executing and therefore not finished yet. |
totalCount | Int32 | The total number of tests executed within the run. |
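The per-status counts of an item partition the tests of the run, so they should add up to totalCount. Using the counts from the sample response above, a quick Python sanity check (field names come from the table above; everything else is illustrative):

```python
import json

# Counts taken from the sample response item above.
sample = json.loads("""
{ "runId": 10386, "status": "FINISHED", "succeededCount": 37, "failedCount": 1,
  "exceptionsCount": 0, "notEnoughProbesCount": 0, "differentDimensionalityCount": 0,
  "notEvaluatedCount": 0, "executingCount": 0, "totalCount": 38 }
""")

COUNT_FIELDS = [
    "succeededCount", "failedCount", "exceptionsCount", "notEnoughProbesCount",
    "differentDimensionalityCount", "notEvaluatedCount", "executingCount",
]

# The individual status counts should sum to the run's total test count.
assert sum(sample[f] for f in COUNT_FIELDS) == sample["totalCount"]

# Success rate, computed the same way as in the PowerShell example below.
success_rate = sample["succeededCount"] / sample["totalCount"] * 100
print(f"Success rate: {success_rate:.1f}%")  # Success rate: 97.4%
```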
Example: PowerShell
# The base URL of the BiG EVAL API and instance.
$bigevalUrl = "https://mybigevalserver/"
$baseUrl = $bigevalUrl + "api/v1/default/"

# Note that you need to request the AccessToken first. We do not show that in this example.
$accessToken = "123123123"

$currentResponse = Invoke-RestMethod -Uri ($baseUrl + "statistics/testresultsscoped?runScope=3&runScopeIdentifier=3&skip=0&take=1") -Headers @{"Authorization"="Bearer $accessToken"}

# Store the last result stats in a variable for easier access.
$currentResult = $currentResponse.items[0]

# Write some statistics to the console.
Write-Host ("Run-ID: " + $currentResult.runId)
Write-Host ("Status: " + $currentResult.status)
Write-Host ("Tests run: " + $currentResult.totalCount)
Write-Host ("Tests succeeded: " + $currentResult.succeededCount)
Write-Host ("Tests failed: " + $currentResult.failedCount)
Write-Host ("Tests excepted: " + $currentResult.exceptionsCount)
Write-Host ("Success-Rate: " + ($currentResult.succeededCount / $currentResult.totalCount * 100) + "%")