IBM CICS Performance Series: FiTeq Authenticator Benchmark

A draft IBM Redpaper publication

Abstract

FiTeq is an IBM Business Partner that specializes in fraud prevention technologies for the payments industry. This IBM® Redpaper™ publication records the methodologies and results of a performance benchmark using the FiTeq Authenticator, which is a component of FiTeq's family of Secure Transaction Solutions.

The FiTeq Authenticator is a CICS-enabled application that was run under CICS Transaction Server for z/OS V5.1 in this benchmark. The performance benchmark was conducted as a joint exercise between IBM and FiTeq in January 2014.

In summary, the benchmark demonstrated that the FiTeq Authenticator application is:
> Scalable: CPU usage scales linearly as the number of transactions per second increases.
> Cost-effective: Only approximately 500 microseconds of CPU is used per transaction in the single region configuration.
> Efficient: Average response times of less than 20 milliseconds per transaction are maintained at transaction rates exceeding 8000 per second.

These benchmark results confirm and validate that the FiTeq Authenticator, in conjunction with the performance, reliability, and scalability provided by the IBM z/OS® and CICS® architectures and associated hardware, is fully capable of satisfying the requirements of the largest financial institutions.

As a by-product of the FiTeq Authenticator performance test, the IBM World-Wide Solutions Cross-ISV Sizing team developed a FiTeq Authenticator Sizing Tool to forecast system requirements based on the transaction rate (TPS) and other requirements of a future FiTeq client. As a result, the IBM presales team and the FiTeq marketing team will be able to recommend the best-fit, most cost-effective IBM software and hardware solution for a particular FiTeq client.
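To illustrate the kind of arithmetic behind such an estimate, the following minimal sketch (not the actual FiTeq Authenticator Sizing Tool, whose model and inputs are not described in this paper) combines the figures quoted above: at approximately 500 microseconds of CPU per transaction, a workload of 8000 transactions per second consumes roughly 4 CPU-seconds per second. The target utilization (headroom) value used here is an assumption for illustration only.

# Illustrative sizing arithmetic only; this is not the FiTeq Authenticator
# Sizing Tool. The CPU cost per transaction is the figure quoted in this
# abstract, and the target utilization is an assumed value.

def estimate_cpu_engines(tps, cpu_seconds_per_txn=0.0005, target_utilization=0.7):
    """Estimate the number of processors needed for a given transaction rate.

    tps                 -- target transactions per second
    cpu_seconds_per_txn -- CPU cost per transaction (0.0005 s is ~500 microseconds)
    target_utilization  -- maximum planned processor utilization (assumption)
    """
    cpu_seconds_per_second = tps * cpu_seconds_per_txn       # total CPU demand
    return cpu_seconds_per_second / target_utilization       # engines at target utilization

# Example: 8000 TPS at ~500 microseconds per transaction is about 4 CPU-seconds
# of work per second, or roughly 6 engines at a 70% utilization target.
print(round(estimate_cpu_engines(8000), 1))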

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

Table of contents

1. Introduction
2. Benchmark objectives
3. Benchmark application topology
4. Scaling CICS applications
5. Environment
6. Measurement methodology
7. Terms used
8. Single CICS region configurations
9. Single region results
10. Multiple CICS region configurations
11. Multiple region results
12. Tuning and configuration considerations
13. Conclusions
Appendix A. Single and multiple region scaling


Disclaimer

These pages are Web versions of IBM Redbooks and Redpapers publications that are still in progress. They are published here for those who need the information now, and they may contain spelling, layout, and grammatical errors.

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. Your feedback is welcome and helps improve the usefulness of the material for others.

IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a customer responsibility and depends upon the customer's ability to evaluate and integrate them into the customer's operational environment.


Profile

Last Update
30 May 2014

Planned Publish Date
18 July 2014


Author(s)

ISBN-10
0738453838

ISBN-13
9780738453835

IBM Form Number
REDP-5114-00

Number of pages
60
