
During a performance testing session we need to monitor and measure various metrics in order to analyze and understand why the application behaves in a certain way under a specific load.


The following are the metrics most commonly collected during performance testing sessions.

Vital performance metrics:

  • Processor usage – the amount of time the processor spends executing non-idle threads.
  • Memory use – the amount of physical memory available to processes on a computer.
  • Disk time – the amount of time the disk is busy executing a read or write request.
  • Bandwidth – the bits per second used by a network interface.
  • Private bytes – the number of bytes a process has allocated that can’t be shared with other processes; used to measure memory leaks and usage.
  • Committed memory – the amount of virtual memory used.
  • Memory pages/second – the number of pages written to or read from disk in order to resolve hard page faults. A hard page fault occurs when code outside the current working set must be retrieved from disk.
  • Page faults/second – the overall rate at which page faults are processed by the processor. This also occurs when a process requires code from outside its working set.
  • CPU interrupts per second – the average number of hardware interrupts the processor receives and services each second.
  • Disk queue length – the average number of read and write requests queued for the selected disk during a sample interval.
  • Network output queue length – the length of the output packet queue, in packets. A sustained value above two indicates delays and a bottleneck that should be addressed.
  • Network bytes total per second – the rate at which bytes are sent and received on the interface, including framing characters.
  • Response time – the time from when a user submits a request until the first character of the response is received.
  • Latency – the processing time needed on the server to handle a request plus the network delay for the request to reach the server.
  • Error rate – the number of errors relative to the total number of requests made during the test.
  • Throughput – the rate at which a computer or network receives requests per second.
  • Concurrent users – the number of virtual users active at any given point during a performance test cycle.
  • Requests per second – the number of requests sent to the target server during a performance test cycle, including HTML, stylesheets, and JavaScript.
  • Data throughput – the number of kilobytes per second transmitted during the performance test cycle, which illustrates the amount of data flowing back and forth.
  • Average response time – the round-trip time it takes for a request from the client to generate a response from the server.
  • Peak response time – the longest response time that occurred within a given performance test cycle.
  • Amount of connection pooling – the number of user requests served by pooled connections. The more requests served from the pool, the better the performance.
  • Maximum active sessions – the maximum number of sessions that can be active at once.
  • Hit ratios – the proportion of SQL statements served from cached data instead of expensive I/O operations; a good place to start when resolving bottlenecks.
  • Hits per second – the number of hits on a web server during each second of a load test.
  • Rollback segment – the amount of data that can be rolled back at any point in time.
  • Database locks – locking of tables and databases needs to be monitored and carefully tuned.
  • Top waits – monitored to determine which wait times can be reduced when tuning how quickly data is retrieved from memory.
  • Thread counts – an application’s health can be measured by the number of threads that are running and currently active.
  • Garbage collection – the process of returning unused memory to the system; its frequency and duration need to be monitored for efficiency.
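Several of the metrics above can be derived directly from raw test results. As a minimal sketch in Python, assuming a hypothetical list of (response time, error flag) samples collected during a test window:

```python
# Hypothetical raw samples from a load test: (response time in ms, error flag).
samples = [(120, False), (95, False), (310, True), (150, False), (480, False)]
test_duration_s = 2.0  # assumed wall-clock duration of the test window

response_times = [rt for rt, _ in samples]
errors = sum(1 for _, is_err in samples if is_err)

average_response_time = sum(response_times) / len(response_times)
peak_response_time = max(response_times)
error_rate = errors / len(samples) * 100          # percentage of failed requests
requests_per_second = len(samples) / test_duration_s

print(f"avg: {average_response_time:.1f} ms, peak: {peak_response_time} ms")
print(f"error rate: {error_rate:.1f}%, throughput: {requests_per_second:.1f} req/s")
```

In a real test the samples would come from your load tool's result log rather than a hard-coded list.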

Introduction

Performance testing is crucial for ensuring that your application runs smoothly and efficiently. Whether you’re a non-technical business owner or a seasoned software engineer, understanding and measuring the right metrics can significantly impact your system’s performance and user experience. This guide will walk you through the essential metrics to monitor during performance testing and how to measure them effectively.

Understanding Performance Testing

What is Performance Testing?

Performance testing is a process used to determine how a system performs under various conditions. It assesses speed, scalability, stability, and reliability by simulating different load scenarios. The primary goal is to identify and address performance bottlenecks before they impact end users.

Why Key Metrics Matter

Metrics are the lifeblood of performance testing. They provide quantifiable data that helps you evaluate how well your system performs and where improvements are needed. By focusing on key metrics, you can make informed decisions that enhance user satisfaction and business outcomes.

Non-Technical Overview of Key Metrics

Response Time

Response time measures the duration between a user’s request and the system’s response. It’s a critical metric because it directly affects user experience. Fast response times lead to happy users, while slow response times can frustrate and drive them away.

Throughput

Throughput refers to the number of transactions processed within a given timeframe. It indicates the system’s capacity to handle a large volume of requests. High throughput is essential for ensuring that your application can serve many users simultaneously without performance degradation.

Error Rate

The error rate measures the percentage of requests that result in errors. High error rates can indicate serious issues with your application, such as bugs or resource limitations. Monitoring error rates helps you maintain the reliability and robustness of your system.

Key Metrics for Technical Audiences

Load Time

Load time is the time it takes for a page or application to load completely. This metric is vital for both user experience and search engine rankings. Tools like Google PageSpeed Insights can help you measure and optimize load time.

const startTime = performance.now();

// Perform some operations

const endTime = performance.now();
const loadTime = endTime - startTime;
console.log(`Load time: ${loadTime} milliseconds`);

Latency

Latency is the delay before a transfer of data begins following an instruction. Lower latency is crucial for applications that require real-time interactions, such as video conferencing or online gaming.

import time

start_time = time.time()

# Simulate data transfer or processing
time.sleep(1)

end_time = time.time()
latency = end_time - start_time
print(f"Latency: {latency} seconds")

Scalability

Scalability measures a system’s ability to handle increased load without compromising performance. It’s essential for applications expected to grow over time. Techniques like load balancing and horizontal scaling are commonly used to enhance scalability.
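To illustrate the load-balancing idea mentioned above, here is a minimal round-robin dispatcher sketch; the server names are made up for the example:

```python
from itertools import cycle

# Hypothetical pool of backend servers; in practice these would be real hosts.
servers = ["app-1", "app-2", "app-3"]
next_server = cycle(servers)

def dispatch(request_id: int) -> str:
    """Assign each incoming request to the next server in round-robin order."""
    return next(next_server)

# Six requests are spread evenly across the three servers.
assignments = [dispatch(i) for i in range(6)]
print(assignments)
```

Real load balancers add health checks and weighting, but the core idea of spreading requests across instances is the same.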

Advanced Metrics and Their Measurement

Resource Utilization

Resource utilization tracks the usage of CPU, memory, and network resources during testing. This metric helps identify whether your system has sufficient resources or if it’s being overutilized, which can lead to performance issues.

# Example command to monitor CPU and memory usage on Linux
top          # interactive view of CPU and memory per process

Concurrent Users and Sessions

Testing for concurrent users and sessions is crucial for understanding how your application handles multiple users simultaneously. This metric helps ensure that your application can maintain performance under heavy user loads.
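A simple way to picture concurrent virtual users is a thread pool where each worker stands in for one user. A sketch, with the request replaced by a placeholder delay:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def virtual_user(user_id: int) -> float:
    """Stand-in for one virtual user's request; returns its response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.05)  # placeholder for a real HTTP request to the system under test
    return time.perf_counter() - start

# Run 10 virtual users concurrently and collect their response times.
with ThreadPoolExecutor(max_workers=10) as pool:
    response_times = list(pool.map(virtual_user, range(10)))

print(f"{len(response_times)} users, slowest: {max(response_times) * 1000:.0f} ms")
```

Dedicated tools such as JMeter do exactly this at a much larger scale, with ramp-up control and result collection built in.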

Peak Load Metrics

Peak load metrics assess how your system performs under maximum load conditions. This type of testing helps you prepare for high-traffic events, such as product launches or sales promotions.

Tools and Techniques for Measuring Metrics

Overview of Popular Tools

Several tools can help you measure performance metrics effectively:

  • JMeter: An open-source tool for load testing and measuring performance.
  • LoadRunner: A comprehensive tool for testing applications under load.

Step-by-Step Guide to Using JMeter

Here’s a quick walkthrough of using JMeter for performance testing:

  1. Download and Install JMeter: Visit the JMeter website and download the latest version.
  2. Create a Test Plan: Open JMeter and create a new test plan.
  3. Add a Thread Group: Define the number of users, ramp-up time, and loop count.
  4. Add Samplers: Specify the requests to be sent to the server.
  5. Add Listeners: Configure listeners to collect and display test results.
  6. Run the Test: Execute the test plan and analyze the results.

Best Practices for Accurate Measurement

Planning and Designing Tests

Proper planning is critical for accurate performance testing. Define clear objectives and identify the key metrics you want to measure. Design your tests to simulate real-world usage as closely as possible.

Executing Tests and Collecting Data

Follow best practices for test execution to ensure accurate data collection. This includes using realistic data, running tests during off-peak hours, and repeating tests to validate results.

Analyzing Results

Analyzing performance data involves identifying trends, pinpointing bottlenecks, and determining areas for improvement. Use visualization tools and dashboards to make sense of the data and communicate findings effectively.

FAQs on Metrics During Performance Testing

What Are the Metrics Monitored in Performance Testing?

  • Response Time: Measures how long it takes for a system to respond to a request.
  • Throughput: Tracks the number of transactions processed in a given time frame.
  • Error Rate: Calculates the percentage of errors encountered during testing.
  • Resource Utilization: Monitors CPU, memory, and network usage during the tests.

What Are the KPIs for Performance Testing?

  • Response Time: Key indicator of user experience and system efficiency.
  • Throughput: Critical for understanding system capacity and performance under load.
  • Error Rate: Essential for ensuring reliability and robustness of the application.
  • Latency: Important for assessing the delay in communication and data processing.

What Is 90% in Performance Testing?

  • 90th Percentile Response Time: Indicates that 90% of the requests are completed within this time frame, providing insight into the worst-case performance experienced by most users.
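The 90th percentile can be computed from the sorted response times. There are several percentile definitions; this sketch uses the simple nearest-rank method:

```python
def percentile(values, pct):
    """Nearest-rank percentile: the smallest value with at least pct% of samples at or below it."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[int(rank) - 1]

# 90% of these requests completed within the printed time.
response_times_ms = [100, 110, 120, 130, 140, 150, 160, 400, 800, 1200]
print(percentile(response_times_ms, 90))
```

Note how the single 1200 ms outlier barely moves the 90th percentile, which is why percentiles are preferred over averages for worst-case analysis.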

What Are the Parameters Measured in Performance Testing?

  • Response Time and Latency: Measure how quickly a system responds to requests.
  • Throughput and Bandwidth: Assess the volume of data and transactions handled.
  • Error Rates and Success Rates: Track the reliability of the system.
  • Resource Utilization: Monitor usage of CPU, memory, and network resources.

What Is KPI and SLA for Performance Testing?

  • KPI (Key Performance Indicator): Metrics that reflect the performance and efficiency of the system, such as response time, throughput, and error rate.
  • SLA (Service Level Agreement): A commitment between service provider and client, defining the expected performance level, such as maximum allowable response time or minimum throughput.
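Once SLA thresholds are agreed, a test run can be checked against them automatically. A sketch with hypothetical thresholds (real values come from the actual agreement):

```python
# Hypothetical SLA thresholds; real values come from the actual agreement.
sla = {"max_avg_response_ms": 500, "max_error_rate_pct": 1.0}

def check_sla(avg_response_ms: float, error_rate_pct: float) -> list:
    """Return a list of SLA violations for a test run (empty if compliant)."""
    violations = []
    if avg_response_ms > sla["max_avg_response_ms"]:
        violations.append("average response time exceeds SLA")
    if error_rate_pct > sla["max_error_rate_pct"]:
        violations.append("error rate exceeds SLA")
    return violations

print(check_sla(avg_response_ms=320.0, error_rate_pct=0.4))  # compliant run
print(check_sla(avg_response_ms=620.0, error_rate_pct=2.5))  # two violations
```

Checks like this are often wired into CI pipelines so that a build fails when a performance test breaches the SLA.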

How to Benchmark Performance Testing?

  • Set Clear Objectives: Define what aspects of performance are most critical.
  • Select Appropriate Metrics: Choose relevant metrics like response time, throughput, and resource utilization.
  • Use Standard Tools and Methods: Employ industry-standard tools and consistent testing procedures.
  • Compare Against Industry Standards: Benchmark your results against known standards or competitors to evaluate performance.

Conclusion

Measuring the right key metrics during performance testing is essential for ensuring your application’s success. By understanding and monitoring these metrics, you can optimize your system’s performance, improve user experience, and achieve better business outcomes.


LoadFocus: Your Partner in Performance Testing

LoadFocus is a modern cloud testing platform: a load and stress testing tool that provides the infrastructure to run tests with thousands of concurrent users from multiple cloud locations in a matter of minutes, keeps a history of the results, and compares different runs to inspect performance improvements or degradations. It also supports running JMeter load tests from the cloud and monitoring and auditing web and mobile performance.

Written by Chris R.
