Filed under Load Testing, Performance Testing.

Before running a load test for APIs, WebSites, Web or Mobile Applications, you should check the tips below and understand which HTTP response codes you could receive and what they mean. This article will help you analyze a load test report or a performance testing report.

General Tips

  • make sure the load tested URL is up and publicly available before running any load or performance tests
  • check the HTTP method (GET is the request browsers make when accessing a website or an API endpoint)
  • start with a small number of clients (25-50 clients) and check the number of errors
  • try to increase the number of the clients gradually to understand how your API, Website or WebApp behaves
  • check the Time, Latency, Hits, Throughput and Errors for each test to assess the performance of your system
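
As a quick pre-flight check for the first tip, here is a small sketch in plain Java that verifies the URL responds before you start a load test. The URL and the helper name are placeholders for illustration, not part of any load testing tool:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class PreflightCheck {

    // Returns the HTTP status code for a GET request, or -1 if the URL
    // could not be reached at all (a "connection error" in a test report).
    public static int checkUrl(String address) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
            conn.setRequestMethod("GET"); // the method browsers use for pages and API reads
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int status = conn.getResponseCode();
            conn.disconnect();
            return status;
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        // Hypothetical URL; replace with the endpoint you plan to load test.
        int status = checkUrl("https://example.com/");
        System.out.println("HTTP status: " + status);
    }
}
```

If this prints anything other than a 2xx status, fix the URL or its availability before ramping up clients.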

If you receive 3xx redirection responses you need to check:

  • if the URL added for the load test is doing any redirects to another URL
  • if you are testing the redirection, try to do the load test on the final URL, after the redirects end
  • uncheck the “Follow redirects” checkbox next to the URL if you want to avoid redirecting your requests and run the load test against the URL exactly as added
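
To see where a URL redirects without following it (the programmatic equivalent of unchecking “Follow redirects”), a minimal sketch along these lines can help; the class and method names here are our own, for illustration only:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectCheck {

    // True for any 3xx redirection status code.
    public static boolean isRedirect(int status) {
        return status >= 300 && status < 400;
    }

    // Requests the URL without following redirects and returns the target
    // of the redirect (the Location header), or null if there is none.
    public static String redirectTarget(String address) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
        conn.setInstanceFollowRedirects(false); // do not follow 3xx responses
        int status = conn.getResponseCode();
        String location = isRedirect(status) ? conn.getHeaderField("Location") : null;
        conn.disconnect();
        return location;
    }
}
```

If `redirectTarget` returns a non-null value, consider load testing that final URL instead.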

If you receive 4xx client errors you need to check:

  • the URL is correct and publicly available
  • the HTTP method used is correct: GET, POST, PUT or DELETE (GET is the request browsers make when accessing a website or an API endpoint)
  • the request headers, cookies, query params or basic HTTP auth params are set correctly

If you receive 5xx server errors you need to check:

  • the response code: usually, if the load tested API, WebSite, Web or Mobile Application doesn’t handle the load properly, you’ll start receiving 503 Service Unavailable responses
  • try to reduce the number of clients (concurrent users) for your load test and check the boundaries of your system

If the tested APIs, WebSites, Web or Mobile Applications don’t handle the load properly, you’ll start receiving 5xx server errors or other connection errors.
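
When scanning a report, grouping responses by status family makes the patterns above easier to spot. A small helper sketch (the class name is ours, for illustration):

```java
public class StatusFamily {

    // Maps an HTTP status code (or -1 for a connection error) to the
    // response families discussed in this article.
    public static String classify(int status) {
        if (status >= 100 && status < 200) return "1xx informational";
        if (status >= 200 && status < 300) return "2xx successful";
        if (status >= 300 && status < 400) return "3xx redirection";
        if (status >= 400 && status < 500) return "4xx client error";
        if (status >= 500 && status < 600) return "5xx server error";
        return "connection error";
    }
}
```

Counting responses per family across a test run quickly shows whether failures are client-side (4xx), server-side (5xx) or connection-level.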
Here is a list of HTTP status messages that might be returned when running a load test for API Endpoints, WebSites, Web and Mobile Applications:

1xx: Information

100 Continue: The server has received the request headers, and the client should proceed to send the request body
101 Switching Protocols: The requester has asked the server to switch protocols
103 Checkpoint: Used in the resumable requests proposal to resume aborted PUT or POST requests

2xx: Successful

200 OK: The request is OK (this is the standard response for successful HTTP requests)
201 Created: The request has been fulfilled, and a new resource is created
202 Accepted: The request has been accepted for processing, but the processing has not been completed
203 Non-Authoritative Information: The request has been successfully processed, but is returning information that may be from another source
204 No Content: The request has been successfully processed, but is not returning any content
205 Reset Content: The request has been successfully processed, but is not returning any content, and requires that the requester reset the document view
206 Partial Content: The server is delivering only part of the resource due to a range header sent by the client

Other: Connection Error

The URL added for the load test could not be accessed, for one of the following reasons:
– the URL is not publicly accessible (we support only URLs which can be accessed publicly via the Internet)
– the system under test crashed or stopped responding
For more details, please see the load testing documentation.

Filed under Load Testing, Performance Testing.

We’ve just added a Documentation page to help you have a better understanding of LoadFocus and of load and performance testing in general.

The Documentation page contains:

  • how to run a new load test 
  • full explanation of all the terms used when creating a new test
  • how to inspect the results of a load test
  • definitions of all the performance metrics used

The list will continue to grow, so please contact us if you feel there is something missing, or something that needs a better explanation.

Please use the “+Need Help” forms on the bottom right of each page to submit suggestions, comments or feedback.

Filed under Apache JMeter.

To install JMeter on a Mac, you have multiple options:

1. Install via Homebrew:

brew install jmeter

2. Download Apache JMeter:

  • download the source code and build it yourself
  • download the binaries

In your Finder, uncompress the archive and go to the directory called ‘bin‘. Then either:
– open ApacheJMeter.jar
– from Terminal, use the following command:

sh ./apache-jmeter-2.11/bin/

Filed under Web Resources.

We wanted to share with everyone how we use Agile methodologies to release new features and get feedback on our release process.
We use two-week sprints, and below are the meetings and details of our process:
Sprint Planning – 1-2 hours
  • Product Owner presents all the stories for the current sprint (what/why do we have to implement) to all team members (DEV/QA/UX)
  • team asks questions to the Product Owner to clarify all stories
  • team estimates each story based on complexity (risk of the implementation, areas impacted by the change, UX designs, QA manual and automation)
  • each story should have a driver and UX, DEV and QA tasks (see below for a full list of tasks)
  • team commits to a list of stories to deliver in the current sprint
Mid Sprint Demo – 1 hour
  • team meets with product owner to show progress and current implementation
  • gets feedback from product owner on current implementation
  • QA presents status for all stories, confidence and risk
End Sprint Demo – 1 hour
  • team presents to product owner final implementation and gets feedback
  • QA presents status for all stories, confidence and risk
Scenario Testing – 3-4 hours
  • the team tests different user scenarios/workflows (different type of users – edge cases)
Sprint Retrospective – 1 hour
  • Start, Keep, Stop doing
  • what went well/badly in the current sprint
  • choose only one thing to improve in the next sprint
  • check during next sprint if the item chosen to improve is going in the right direction
Default Story Tasks (every story should have after planning ends)
  • DEV: (if needed) discuss with other teams potential implications
  • DEV: (if needed) Create documentation
  • DEV: Start implementation based on current documentation and designs
  • DEV: Code review
  • QA: Write test plan (wiki page – choose which tests can be automated and make sense to automate)
  • QA: Automation: implement/update tests or test scripts from current test plan – (UI, Performance and Load testing, Security)
  • QA: Manual: test the implementation (after code review) – (UI, Performance and Load testing, Security)
  • DEV: Bug Fixing
  • QA: Bug Regression
Definition of Done (every story is completed after it meets all items in the definition of done)
  • DEV and QA complete
  • Automation
  • i18n/localisation
  • release notes
Please let me know your thoughts, ideas, comments.

Filed under Selenium WebDriver, Test Automation, UI Testing.


If you want to take snapshots of the browser while running tests using Selenium WebDriver, here are some ways to do it.
Basically, the only thing you need to do is use the class below to take snapshots of the URLs you want:

package com.webdriver.automation.tests;

import java.io.File;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestUtils {

    private final static Logger log = LoggerFactory.getLogger(TestUtils.class);

    public static void takeDriverSnapShot(WebDriver driver, String screenSnapshotName) {
        // Note: the system property key was left empty in the original; "user.dir" is a common choice.
        File browserFile = new File(System.getProperty("") + File.separator + screenSnapshotName + ".png");
        snapshotBrowser((TakesScreenshot) driver, screenSnapshotName, browserFile);
    }

    private static void snapshotBrowser(TakesScreenshot driver, String screenSnapshotName, File browserFile) {
        try {
            File scrFile = driver.getScreenshotAs(OutputType.FILE);
            log.info("PNG browser snapshot file name: \"{}\"", browserFile.toURI().toString());
            FileUtils.moveFile(scrFile, browserFile);
        } catch (Exception e) {
            log.error("Could not create browser snapshot: " + screenSnapshotName, e);
        }
    }

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("https://www.google.com"); // navigate to the page to capture
        TestUtils.takeDriverSnapShot(driver, "google_com");
        driver.quit();
    }
}

Have a look at the main method which uses the code for taking a snapshot.

If you’re using TestNG as a unit testing framework, it’s even easier to take screenshots.
The code below takes screenshots of failed tests and stores them in the Surefire reports directory: “/target/surefire-reports/screenShots/”.

@AfterMethod(alwaysRun = true)
public void afterMethod(ITestResult result) throws Exception {
    if (!result.isSuccess()) {
        takeScreenShoot(threadDriver, result.getMethod());
    }
    // Quit environment.
    threadDriver.get().quit();
}

public void takeScreenShoot(ThreadLocal<WebDriver> threadDriver, ITestNGMethod testMethod) throws Exception {
    WebDriver augmentedDriver = new Augmenter().augment(threadDriver.get());
    File screenshot = ((TakesScreenshot) augmentedDriver).getScreenshotAs(OutputType.FILE);
    String nameScreenshot = testMethod.getMethodName();
    String path = getPath(nameScreenshot);
    FileUtils.copyFile(screenshot, new File(path));
    Reporter.log("<a href=\"file://" + path + "\" target=\"_blank\">" + this.getFileName(nameScreenshot) + "</a>");
}

private String getFileName(String nameTest) throws IOException {
    DateFormat dateFormat = new SimpleDateFormat(""); // note: the date pattern was left empty in the original
    Date date = new Date();
    return dateFormat.format(date) + "_" + nameTest + ".png";
}

private String getPath(String nameTest) throws IOException {
    File directory = new File(".");
    String newFileNamePath = directory.getCanonicalPath() + "/target/surefire-reports/screenShots/" + getFileName(nameTest);
    return newFileNamePath;
}

Use Selenium WebDriver to take screenshots of failed tests so you can better understand your failures and debug WebDriver tests more easily.

You can also take a look at the previous Selenium WebDriver blog post to learn how to locate elements in a page.

Filed under Selenium WebDriver, Test Automation, UI Testing.


If you need to run Selenium WebDriver UI tests in a browser that sends its traffic through a proxy, here is a code example of such a Selenium test.
In this article we show how easy it is to configure WebDriver to use a proxy, with a small code snippet that uses the Firefox driver and sets the HTTP, SSL and FTP proxies:

package com.webdriver.automation.tests;

import java.util.ArrayList;
import java.util.List;

import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.Proxy.ProxyType;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class WebdriverUsingProxy {

    private List<String> urlList = new ArrayList<String>();

    @Before
    public void setup() {
        // Populate urlList with the pages to visit (left empty in the original).
    }

    @Test
    public void passTrafficThroughProxyTest() {
        // Proxy addresses were left empty in the original; use host:port values, e.g. "localhost:8080".
        String httpProxy = "";
        String sslProxy = "";
        String ftpProxy = "";

        DesiredCapabilities capability = new DesiredCapabilities();
        addProxyCapabilities(capability, httpProxy, sslProxy, ftpProxy);

        for (String url : urlList) {
            WebDriver driver = new FirefoxDriver(capability);
            driver.get(url);
            driver.quit();
        }
    }

    public static DesiredCapabilities addProxyCapabilities(DesiredCapabilities capability, String httpProxy, String sslProxy,
            String ftpProxy) {
        Proxy proxy = new Proxy();
        proxy.setProxyType(ProxyType.MANUAL);
        proxy.setHttpProxy(httpProxy);
        proxy.setSslProxy(sslProxy);
        proxy.setFtpProxy(ftpProxy);
        capability.setCapability(CapabilityType.PROXY, proxy);
        capability.setCapability(CapabilityType.ACCEPT_SSL_CERTS, true);
        return capability;
    }
}

In the code above we set the proxy type to manual, and we set the HTTP, SSL and FTP proxy addresses. This is done using the Proxy object and setting the desired capability, pretty easy 🙂

If you want to see an easy and complete way to locate web elements, check our last Selenium WebDriver blog post.

Have fun passing traffic through a proxy in your tests. Also check out LoadFocus, our Cloud Load and Performance Application Testing Service. You can stress your app/website/API with thousands of concurrent users from all over the world.

Filed under General.

We think easier and simpler is better. Discover how your website or app behaves under custom conditions: number of concurrent users, regions all around the world, custom number of requests or delay between users.

We selected the features we think suit the needs of every business owner, developer, quality assurance or operations person. Just let us know if you think something is missing. Thanks in advance.

New release includes

  • takes less than 1 minute to configure your load test
  • load test types: per duration and per number of requests
  • multiple locations
  • GET, POST, PUT and DELETE HTTP methods supported
  • we support: request query parameters, cookies, HTTP headers, POST body, Basic HTTP Authentication
  • live load test results
  • new UI and workflows
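
As an illustration of the supported request options (query parameters, cookies, HTTP headers, POST body, Basic HTTP Authentication), here is how such a request could be built in plain Java; the endpoint, cookie and credentials below are made-up placeholders:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SampleRequest {

    // Builds the Basic HTTP Authentication header value for a user/password pair.
    public static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        try {
            URL url = new URL("https://example.com/api/items?active=true");          // query parameter
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");                                           // HTTP method
            conn.setRequestProperty("Content-Type", "application/json");             // HTTP header
            conn.setRequestProperty("Cookie", "session=abc123");                     // cookie
            conn.setRequestProperty("Authorization", basicAuth("user", "secret"));   // Basic HTTP auth
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {                        // POST body
                out.write("{\"name\":\"test\"}".getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Status: " + conn.getResponseCode());
        } catch (Exception e) {
            System.out.println("Request failed: " + e.getMessage());
        }
    }
}
```

These are the same request parts you can configure when creating a load test.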

Click here to sign up. Enjoy!

load testing live monitoring