IBM CICS Performance Series: FiTeq Authenticator Benchmark
An IBM Redpaper publication
Published 11 August 2014
IBM Form #: REDP-5114-00
Authors: John Burgess, Chris Hui, Simon Ma, John Weber
FiTeq is an IBM® Business Partner that specializes in fraud prevention technologies for the payments industry. This IBM Redpaper™ publication records the methodologies and results of a performance benchmark using the FiTeq Authenticator, which is a component of FiTeq's family of Secure Transaction Solutions.
The FiTeq Authenticator is an IBM CICS® enabled application that was run under CICS Transaction Server for z/OS® V5.1 in this benchmark. The performance benchmark was conducted as a joint venture between IBM and FiTeq in January 2014.
In summary, the following FiTeq Authenticator application performance characteristics were demonstrated:
- A scalable solution: CPU usage scales linearly as the number of transactions per second increases.
- Cost-effective: Only approximately 500 microseconds of CPU time per transaction were used in the single-region configuration.
- Efficient: Average response times below 20 milliseconds per transaction were maintained at a transaction rate exceeding 8,000 per second.
These benchmark results confirmed and validated that the FiTeq Authenticator, combined with the performance, reliability, and scalability provided by the IBM z/OS and CICS architectures and associated hardware, is fully capable of satisfying the requirements of the largest financial institutions.
As a by-product of the FiTeq Authenticator performance test, the IBM World-Wide Solutions-Cross ISV Sizing team developed a FiTeq Authenticator Sizing Tool that forecasts system requirements based on the transactions per second (TPS) and other system requirements of any future FiTeq client. With this tool, the IBM presales team and the FiTeq marketing team can recommend the best-fit, most cost-effective IBM software and hardware solution for a particular FiTeq client.
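The core arithmetic behind such a forecast can be illustrated with a short sketch. This is a hypothetical simplification, not the actual FiTeq Authenticator Sizing Tool; the function name and the target-utilization parameter are assumptions, while the ~500 microseconds of CPU per transaction is the figure measured in this benchmark.

```python
import math

# Benchmark figure: approximately 500 microseconds of CPU per transaction
CPU_SECONDS_PER_TXN = 500e-6

def estimated_engines(tps: float, target_utilization: float = 0.7) -> int:
    """Rough number of processors needed to sustain a given TPS,
    leaving headroom via a target utilization (hypothetical parameter)."""
    cpu_seconds_per_second = tps * CPU_SECONDS_PER_TXN
    return math.ceil(cpu_seconds_per_second / target_utilization)

# At the benchmark's 8,000 TPS: 8000 * 0.0005 = 4.0 CPU-seconds per second,
# so roughly 6 engines at 70% target utilization.
print(estimated_engines(8000))
```

A real sizing exercise would also account for the other workload on the LPAR, processor model, and I/O configuration, as the performance disclaimer below notes.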
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user experiences will vary depending upon many factors, including the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
Table of contents
Chapter 1. Introduction
Chapter 2. Benchmark objectives
Chapter 3. Benchmark application topology
Chapter 4. Scaling CICS applications
Chapter 5. Environment
Chapter 6. Measurement methodology
Chapter 7. Terms used
Chapter 8. Single CICS region configurations
Chapter 9. Single region results
Chapter 10. Multiple CICS region configurations
Chapter 11. Multiple region results
Chapter 12. Tuning and configuration considerations
Chapter 13. Conclusions
Appendix A. Single and multiple region scaling