TermAppISONFT: Orkhestra Cross Test Performance Summary

## starts:  Mon Jul 15 22:47:55 2024

Introduction

This cross-test performance summary report has three elements. The first summarises the percentage of successful outcomes for each function/operation/call. The second compares the performance of the latest test(s) against the pooled performance of previous tests for each function/operation/call and outcome. The third compares performance by function/operation/call and outcome across multiple NFT result sets.

The percentage of successful outcomes is presented as a summary for the latest test(s). For each scenario, the percentage of good outcomes is calculated against all attempts of that scenario in the test, and the summary is ranked in increasing order of this percentage (worst first).
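As a minimal, illustrative sketch (not the report's actual code), the per-scenario percentages could be derived as follows; the results data frame, its column names and the definition of a good outcome are assumptions, with the counts taken from the summary table for test 1 below.

# Assumed input: one row per Basename/Outcome with the Count of attempts.
results <- data.frame(
  Basename = c("authorisation_request_1100", "transaction_advice_response_1230"),
  Outcome  = c("AUTHORISATION_RESPONSE_1110_OK", "TRANSACTION_ADVICE_RESPONSE_1230_OK"),
  Count    = c(126212, 125986))

good     <- results$Outcome != "timeout"                    # assumed definition of a good outcome
attempts <- aggregate(Count ~ Basename, data = results, sum)
goods    <- aggregate(Count ~ Basename, data = results[good, ], sum)
pct      <- merge(attempts, goods, by = "Basename",
                  suffixes = c(".all", ".good"), all.x = TRUE)
pct$Count.good[is.na(pct$Count.good)] <- 0
pct$Percent <- 100 * pct$Count.good / pct$Count.all
pct[order(pct$Percent), ]                                   # worst percentage of good outcomes first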

As a summary, and to rank the performance of the latest test results against previous test results, the tests in the last test session are compared against the tests in previous sessions for each function/operation/call and outcome. This is accomplished by pooling the sample means and the sample standard deviations of the response times across all prior tests, and then comparing the latest test(s) with the pooled previous tests using tsum.test. The results are ranked by the corresponding \(p\)-values in increasing order and tabulated. For each function/operation/call request, three comparison tests are made: the first measures the difference between the respective response time distributions; the second measures whether the response times in the latest test(s) could be considered worse than in the pooled previous tests; and the third measures whether they could be considered better.
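The report's own comparison code is not shown here; the following is a minimal sketch of the three summary-statistic tests using BSDA::tsum.test (a Welch modified two-sample t-test when var.equal is left at its default), with the authorisation_request_1100 figures tabulated later in this report. The names pvalue.d and pvalue.l match the column names in the tables below; pvalue.g for the "worse" test and the list structure are assumptions.

library(BSDA)

# latest and pooled each hold the sample mean, sample standard deviation and
# observation count for the latest test and the pooled previous tests.
compare_item <- function(latest, pooled) {
  p_for <- function(alt)
    tsum.test(mean.x = latest$Resp, s.x = latest$StdDev, n.x = latest$Count,
              mean.y = pooled$Resp, s.y = pooled$StdDev, n.y = pooled$Count,
              alternative = alt)$p.value
  c(pvalue.d = p_for("two.sided"),  # any difference between the distributions
    pvalue.g = p_for("greater"),    # latest response times worse (larger)
    pvalue.l = p_for("less"))       # latest response times better (smaller)
}

# authorisation_request_1100 / AUTHORISATION_RESPONSE_1110_OK
compare_item(latest = list(Resp = 0.204, StdDev = 0.014, Count = 126212),
             pooled = list(Resp = 0.233, StdDev = 0.429, Count = 280686))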

In addition to tabulating the response time means and standard deviations by function/operation/call and outcome across the tests, box-plots are produced to compare the performance and outcomes over the various tests visually. In each case, the box-plots show up to the 15 functions/operations/calls whose response time distributions differ most from the historic distributions, followed by one box-plot each for those showing the greatest increases and the greatest decreases in response times compared to their historic counterparts.

The last section of the report compares the performance by function/operation/call and outcome across multiple NFT result sets. The summary results have been taken from the application performance sections of the individual NFT sessions. The Resp value is the sample mean of the response times in seconds, and StdDev is the corresponding sample standard deviation. In each case, only those values where the customer or business function arrival rate did not materially exceed the peak observed/production target are included in the calculation.

Summary of successful outcomes for latest testing

Test 1 - TermAppISONFT - TermAppISO

The following table is a summary of the outcomes of test 1 (TermAppISONFT - TermAppISO), showing the percentage of functions/operations/calls considered successful. The scenarios are ordered from the worst percentage of good outcomes to the best:

| StartTime | TestNumber | Label | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|---|---|
| 2024-07-15 15:36:01 | 1 | TermAppISONFT | TermAppISO | authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 126212 | 100 | 0.204 | 0.014 |
| 2024-07-15 15:36:01 | 1 | TermAppISONFT | TermAppISO | transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 125986 | 100 | 0.103 | 0.010 |

Comparison of latest tests to pooled previous tests

The last test date in the summary data is used to delimit the prior tests from the tests in the last test session. This section compares the tests performed on 2024-07-15 to the tests that ran in sessions prior to this date. Comparisons are made only for the successful outcomes, and only the performance data where the rate in each of the tests included in the comparison did not exceed the target rate is included in the comparison.
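A minimal sketch of this delimiting step, assuming a per-test summary data frame perf keyed by TestDate (the dates are those tabulated later in this report); the rate filter is indicated only as a comment because the rate columns are not shown in this report.

# Assumed per-test summary, one row per test session.
perf <- data.frame(
  TestDate = as.Date(c("2023-10-11", "2023-10-12", "2024-03-20", "2024-07-15")))

# Rows whose observed rate exceeded the target rate would be dropped here
# before any comparison (rate columns assumed, not shown).
last_date <- max(perf$TestDate)                              # 2024-07-15
latest    <- perf[perf$TestDate == last_date, , drop = FALSE]
previous  <- perf[perf$TestDate <  last_date, , drop = FALSE]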

Differences in response time distributions

The following shows the comparisons of the good outcomes of the tests performed on 2024-07-15 against the tests performed before this date. The table is ranked in increasing order of the \(p\)-values from the corresponding Welch Modified Two-Sample t-Test (two.sided), starting from the function/operation/call where the response time distribution differences are the greatest. Results are shown only where the \(p\)-value is less than or equal to the cutoff value (\(\alpha\) = 0.05).
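A minimal sketch of this cutoff and ranking step (the data frame itself is assumed; the pvalue.d column name and its values are taken from the comparison table below):

cmp <- data.frame(
  Basename = c("authorisation_request_1100", "transaction_advice_response_1230"),
  pvalue.d = c(0, 0))

alpha <- 0.05
shown <- cmp[cmp$pvalue.d <= alpha, ]          # apply the cutoff
shown[order(shown$pvalue.d), ]                 # smallest p-value (largest difference) first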

Test 1 - TermAppISONFT - TermAppISO

The following compares the response time differences between the test started at 2024-07-15 15:36:01 and the tests from previous test sessions.

| Basename | Outcome | Count | Resp | StdDev | PrevCount | PrevMean | PrevStdDev | pvalue.d |
|---|---|---|---|---|---|---|---|---|
| authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 126212 | 0.204 | 0.014 | 280686 | 0.233 | 0.429 | 0 |
| transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 125986 | 0.103 | 0.010 | 280295 | 0.121 | 0.350 | 0 |

| Key | Basename |
|---|---|
| A | authorisation_request_1100 |
| B | transaction_advice_response_1230 |

Increases in the response times

There were no significant response time increases when comparing the test(s) in the last test session to tests from earlier test sessions for any of the items.

Decreases in the response times

The following shows the comparisons of the good outcomes of the tests performed on 2024-07-15 against the tests performed before this date. The table is ranked in increasing order of the \(p\)-values from the corresponding Welch Modified Two-Sample t-Test (less), starting from the function/operation/call where the response time decreases are the greatest. Results are shown only where the \(p\)-value is less than or equal to the cutoff value (\(\alpha\) = 0.05).

Test 1 - TermAppISONFT - TermAppISO

The following compares the response time decreases between the test started at 2024-07-15 15:36:01 and the tests from previous test sessions.

| Basename | Outcome | Count | Resp | StdDev | PrevCount | PrevMean | PrevStdDev | pvalue.l |
|---|---|---|---|---|---|---|---|---|
| authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 126212 | 0.204 | 0.014 | 280686 | 0.233 | 0.429 | 0 |
| transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 125986 | 0.103 | 0.010 | 280295 | 0.121 | 0.350 | 0 |

| Key | Basename |
|---|---|
| A | authorisation_request_1100 |
| B | transaction_advice_response_1230 |

Comparison across all tests individually

This section compares performance across the NFT tests to date for each of the functions/operations/calls included in the corresponding tests.

In each of the box-plots that follow, the centre line is the sample mean response time in seconds for that particular function/operation/call, qualified by its outcome. The lower and upper edges of the box are the sample mean minus and plus one sample standard deviation, respectively, and the whisker minimum and maximum are the sample mean minus and plus two sample standard deviations.
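A minimal ggplot2 sketch (not the report's actual plotting code) of this construction, using the authorisation_request_1100 / AUTHORISATION_RESPONSE_1110_OK figures from the table below:

library(ggplot2)

# Mean and standard deviation per test date, taken from the descriptive
# statistics tabulated below.
stats <- data.frame(
  TestDate = c("2023-10-11", "2023-10-12", "2024-03-20", "2024-07-15"),
  Resp     = c(0.426, 0.431, 0.204, 0.204),
  StdDev   = c(0.920, 1.403, 0.011, 0.014))

ggplot(stats, aes(x = TestDate)) +
  geom_boxplot(aes(ymin   = Resp - 2 * StdDev,   # whisker: mean - 2 sd
                   lower  = Resp - StdDev,       # box lower edge: mean - sd
                   middle = Resp,                # centre: sample mean
                   upper  = Resp + StdDev,       # box upper edge: mean + sd
                   ymax   = Resp + 2 * StdDev),  # whisker: mean + 2 sd
               stat = "identity") +
  labs(x = "Test date", y = "Response time (s)")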

Performance of authorisation_request_1100 with outcome: AUTHORISATION_RESPONSE_1110_OK

The following table shows the performance descriptive statistics for authorisation_request_1100 when the outcome is AUTHORISATION_RESPONSE_1110_OK.

| TestDate | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|
| 2023-10-11 | TermAppISO | authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 17788 | 100.000 | 0.426 | 0.920 |
| 2023-10-12 | TermAppISO | authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 18615 | 99.979 | 0.431 | 1.403 |
| 2024-03-20 | TermAppISO | authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 244283 | 100.000 | 0.204 | 0.011 |
| 2024-07-15 | TermAppISO | authorisation_request_1100 | AUTHORISATION_RESPONSE_1110_OK | 126212 | 100.000 | 0.204 | 0.014 |

Performance of authorisation_request_1100 with outcome: timeout

The following table shows the performance descriptive statistics for authorisation_request_1100 when the outcome is timeout.

| TestDate | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|
| 2023-10-12 | TermAppISO | authorisation_request_1100 | timeout | 4 | 0.021 | 99.999 | 0 |

Performance of transaction_advice_response_1230 with outcome: timeout

The following table shows the performance descriptive statistics for transaction_advice_response_1230 when the outcome is timeout.

| TestDate | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|
| 2023-10-12 | TermAppISO | transaction_advice_response_1230 | timeout | 1 | 0.005 | 99.999 | 0 |

Performance of transaction_advice_response_1230 with outcome: TRANSACTION_ADVICE_RESPONSE_1230_OK

The following table shows the performance descriptive statistics for transaction_advice_response_1230 when the outcome is TRANSACTION_ADVICE_RESPONSE_1230_OK.

| TestDate | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|
| 2023-10-11 | TermAppISO | transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 17708 | 100.000 | 0.224 | 1.042 |
| 2023-10-12 | TermAppISO | transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 18542 | 99.995 | 0.263 | 0.901 |
| 2024-03-20 | TermAppISO | transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 244045 | 100.000 | 0.103 | 0.006 |
| 2024-07-15 | TermAppISO | transaction_advice_response_1230 | TRANSACTION_ADVICE_RESPONSE_1230_OK | 125986 | 100.000 | 0.103 | 0.010 |

Performance of transaction_request_2200 with outcome: POSIPMON_EXTENDED_FUNCTION_LOGON_RESPONSE_2210

The following table shows the performance descriptive statistics for transaction_request_2200 when the outcome is POSIPMON_EXTENDED_FUNCTION_LOGON_RESPONSE_2210.

| TestDate | Description | Basename | Outcome | Count | Percent | Resp | StdDev |
|---|---|---|---|---|---|---|---|
| 2023-10-11 | TermAppISO | transaction_request_2200 | POSIPMON_EXTENDED_FUNCTION_LOGON_RESPONSE_2210 | 17813 | 100 | 0 | 0 |

Session details

sessionInfo();
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-redhat-linux-gnu (64-bit)
## Running under: CentOS Linux 7 (Core)
## 
## Matrix products: default
## BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] grid      stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
## [1] pander_0.6.3    doBy_4.6.7      cmlrutils_1.19  XML_3.98-1.20  
## [5] scales_1.1.1    ggplot2_3.3.2   BSDA_1.2.0      lattice_0.20-38
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.7       highr_0.8        compiler_3.6.0   pillar_1.4.6    
##  [5] rmdformats_1.0.0 class_7.3-15     tools_3.6.0      digest_0.6.25   
##  [9] evaluate_0.14    lifecycle_0.2.0  tibble_3.0.3     gtable_0.3.0    
## [13] pkgconfig_2.0.3  rlang_0.4.7      Matrix_1.2-17    yaml_2.2.1      
## [17] xfun_0.17        e1071_1.7-4      withr_2.2.0      stringr_1.4.0   
## [21] dplyr_1.0.2      knitr_1.30       generics_0.0.2   vctrs_0.3.2     
## [25] tidyselect_1.1.0 glue_1.4.1       R6_2.4.1         rmarkdown_2.6   
## [29] bookdown_0.20    farver_2.0.3     tidyr_1.1.2      purrr_0.3.4     
## [33] magrittr_1.5     backports_1.1.8  MASS_7.3-51.4    ellipsis_0.3.1  
## [37] htmltools_0.5.0  colorspace_1.4-1 Deriv_4.0.1      labeling_0.3    
## [41] stringi_1.5.3    munsell_0.5.0    broom_0.7.0      crayon_1.3.4