Users tend to believe that the “PERFORMANCE” of a system is synonymous with Response Time – the number of elapsed seconds it takes for them to receive an initial reply screen from the system after they submit a form.
It’s definitely a very odd belief – anything that affects the timelines or the productivity of the user gets thrown back into the Performance backyard.
The other important aspect, which measures performance at the business level, is Throughput – how many completed units of work (orders, customer service requests, investigations) can be processed per minute (or hour, or workday).
The catch here is, “Excellent Response Time for the system does not guarantee that its Throughput will be excellent too!!”
For example, in an initial design, assume that completing a certain business transaction requires 24 interactions (each averaging 30 seconds). If the transaction can be redesigned to require only 18 interactions (with comparable response and user input times), the new design improves user productivity and throughput.
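The arithmetic behind this example can be sketched as follows. This is a simplified model that assumes a single user working back-to-back interactions with no overlap or queuing; the numbers come straight from the example above.

```python
# Throughput impact of reducing the interactions needed per transaction.
# Simplified single-user model: each interaction takes a fixed time and
# the user works on them back-to-back.

def transactions_per_hour(interactions: int, seconds_per_interaction: float) -> float:
    """Completed transactions a single user can finish in one hour."""
    return 3600.0 / (interactions * seconds_per_interaction)

before = transactions_per_hour(24, 30)   # 24 x 30 s = 720 s  -> 5.0 per hour
after = transactions_per_hour(18, 30)    # 18 x 30 s = 540 s  -> ~6.67 per hour
gain = (after - before) / before         # ~33% throughput improvement

print(f"before={before:.2f}/hr after={after:.2f}/hr gain={gain:.0%}")
```

Note that response time per interaction is unchanged here – the 33% throughput gain comes entirely from the redesign, which is exactly why good response time alone does not guarantee good throughput.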
These two parameters, “Response Time” & “Throughput”, come in handy in measuring the average, peak, or threshold values pertaining to the performance of the system.
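To make the average/peak/threshold idea concrete, here is a minimal sketch over a handful of hypothetical response-time samples. The sample values and the 2.0-second target are assumptions for illustration, not values defined by any product.

```python
# Hypothetical response-time samples in seconds; 2.0 s is an assumed
# service-level target, used only to illustrate threshold checking.
samples = [0.8, 1.2, 0.9, 3.1, 1.0, 2.4, 0.7]
threshold = 2.0

average = sum(samples) / len(samples)            # mean response time
peak = max(samples)                              # worst observed response
violations = [s for s in samples if s > threshold]  # samples over target

print(f"avg={average:.2f}s peak={peak}s over_threshold={len(violations)}")
```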
Every application in the market boasts about supporting tens of thousands of busy users, millions of work objects, and hundreds of thousands of interactions per hour, in use worldwide.
But this just shows the SCALING FACTOR…the driving force behind these High-Performing systems is adherence to good design and implementation practices.
Best results are often achieved through performance analysis and tuning projects that may involve hardware, operating system, network, database, and workstation browser changes, in addition to evolution of the product (the BPM platform).
The best way to create a high-performance application is to follow good design and implementation practices throughout the development cycle. Paying attention to performance only late in the cycle, or in remedial or emergency situations, can be costly and disruptive.
So where does Pega PRPC stand with its offerings and solutions for keeping a HAWK EYE on the performance of the system?
Here we go!!!
- The 10 Guardrails of Success
- Adopt an Iterative Approach
- Establish a Robust Foundation
- Do Nothing That is Hard
- Limit Custom Java
- Build for Change ™
- Design Intent-driven Processes
- Create Easy-to-Read Flows
- Monitor Performance Regularly
- Calculate and Edit Declaratively, Not Procedurally
- Keep Security Object-Oriented, Too
- Performance & Monitoring Tools:
- PAL – Performance Analyser
- Preflight – Form Browser Compatibility and Warnings
- DBTrace – DB Related Issues/Errors/Warnings
- PLA – PegaRULES Log Analyser (analysing Alert and GC logs)
- AES – Autonomic Event Services (monitors and consolidates alerts from one or more target Process Commander systems, to identify patterns and trends in the generated alerts and to suggest priorities and specific steps for remedial attention)
- Performance Profile Gadget – provides a detailed trace of performance information about the execution of activities, when condition rules, and model rules executed by your requestor session
- SMA – System Management Application – allows the systems administrator to examine logs, monitor system health, and make changes to one or more separate Process Commander servers
- Developer Assistance & Debugging Tools:
- Clipboard – internal memory space dedicated to each requestor, where the entered values can be inspected via a UI screen
- Tracer – debug a decision rule or activity by stepping through its execution one step at a time
- Rule warnings : Each time a rule is saved, it is validated. Warnings/Errors appear describing apparent variances from good practices.
- Appropriate rule types : a heat map showcasing the different types of rules associated with the application, from a holistic comparison view-point
- Auto-generated : all sections and layouts come with auto-generated CSS and HTML. Customizing these to fit our purpose affects the browser’s performance.
- Generated SQL : under unusual or special circumstances, when we include hand-crafted SQL that bypasses the generated SQL provided by the product – without much clarity on the DB schema and indices – it adds to the performance issues
- Alerts : more than 40 built-in alerts and a dedicated alert log help in identifying the individual atomic operations that were costly in terms of elapsed time, quantity of data processed, CPU time, JVM memory, or other dimensions
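A common first step when triaging such an alert log is simply counting which alert message IDs fire most often, so the costliest operations surface first. The sketch below uses an assumed, simplified line format – a real Pega alert log carries many more delimited fields – and the sample lines are illustrative, not captured output.

```python
# Minimal alert-log triage sketch: tally alert message IDs so the most
# frequent ones surface first. The log lines below are invented samples
# in a simplified format, not a real Pega alert log.
from collections import Counter
import re

sample_log = """\
2013-05-01 10:02:11 ALERT PEGA0001 HTTP interaction time exceeds limit
2013-05-01 10:02:45 ALERT PEGA0005 Query time exceeds limit
2013-05-01 10:03:02 ALERT PEGA0001 HTTP interaction time exceeds limit
"""

counts = Counter(re.findall(r"PEGA\d{4}", sample_log))
for alert_id, count in counts.most_common():
    print(alert_id, count)
```

Tools like PLA and AES automate exactly this kind of aggregation (and much more) across full log files, which is why they belong in every performance review.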
Happy Learning!! 🙂