Filed under JAVA, JUnit.

Why structure your integration tests?

When the number of integration tests grows, it is good practice to separate the tests based on the feature/API that they are testing.

The advantages of doing this are:

  • easier to maintain, as the tests for a specific feature/API are grouped in a specific category
  • easier to run only the tests for a specific category when that feature changes, for faster feedback
  • easier to group the tests when running them in Jenkins, Travis CI or any other CI, so that multiple jobs can run in parallel; also, when a job fails, the team immediately knows which feature/API has failed

How is this done using Java, JUnit and Maven?

  1. Create a basic Maven project
  2. Create a new package in the src/main/java with the name “tutorial.junit.categories” in which the different categories will be added.
    The structure of the project should look like below:
  3. Create a few JUnit test cases in a few test classes. In our case we are going to create:
    •  LoginApiTestIT.java
    •  SearchApiTestIT.java
  4. Create a few categories in which we want to split the tests. In our case we are going to split them as follows:
    • LoginApiCategory – tests run against the login api
    • SearchApiCategory – tests run against the search api
    • SmokeTestCategory – which includes tests from either Login or Search API tests that we want to run as smoke tests
  5. A category is just an empty interface as in the example below:
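For instance, the three categories from step 4 can be declared as empty marker interfaces. A minimal sketch (the interfaces are shown together in one snippet for brevity; in the project each one goes in its own file under tutorial.junit.categories, and the wrapper class with main exists only to make the snippet self-contained):

```java
// Each JUnit category is just an empty marker interface with no methods.
interface LoginApiCategory {}
interface SearchApiCategory {}
interface SmokeTestCategory {}

public class CategoriesDemo {
    public static void main(String[] args) {
        // A category carries no behaviour; it is only a type token that
        // the @Category annotation and the -Dgroups parameter refer to.
        System.out.println(LoginApiCategory.class.isInterface());      // true
        System.out.println(SmokeTestCategory.class.getSimpleName());   // SmokeTestCategory
    }
}
```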
  6. Set the category for each of the tests. For setting the category, we just need to add the @Category annotation at either the class or method level, as below.
  7. Example below:
    
    package tutorial.junit;

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    import tutorial.junit.categories.LoginApiCategory;
    import tutorial.junit.categories.SmokeTestCategory;

    @Category(LoginApiCategory.class)
    public class LoginApiTestIT {

        @Category(SmokeTestCategory.class)
        @Test
        public void shouldReturn200ForValidCredentials() {
            System.out.println("Running Login API tests - Positive case");
        }

        @Test
        public void shouldReturn404ForInvalidCredentials() {
            System.out.println("Running Login API tests - Negative case");
        }
    }
    
    
  8. Also, for running just the integration tests, we are going to create a Maven profile in our Maven pom.xml, as below:
    <profiles>
        <!-- The Configuration of the integration-test profile -->
        <profile>
            <id>integration-test</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-failsafe-plugin</artifactId>
                        <version>2.18.1</version>
                        <executions>
                            <execution>
                                <id>integration-test</id>
                                <goals>
                                    <goal>integration-test</goal>
                                </goals>
                            </execution>
                            <execution>
                                <id>verify</id>
                                <goals>
                                    <goal>verify</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>

       

  9. Running the tests based on the category is pretty simple using the Maven commands below. Open a terminal in the directory of your Maven pom.xml file and run the following commands:
    • running the login tests
      mvn clean install -P integration-test -Dgroups="tutorial.junit.categories.LoginApiCategory"
    • running the search tests
      mvn clean install -P integration-test -Dgroups="tutorial.junit.categories.SearchApiCategory"
    • running the smoke tests which include tests from login and search
      mvn clean install -P integration-test -Dgroups="tutorial.junit.categories.SmokeTestCategory"

Filed under JAVA.

Here is how to click a link by text with Selenium WebDriver in Java, using the built-in WebDriver helper methods or XPath:

Click link by full text using Selenium WebDriver

WebElement linkByText = driver.findElement(By.linkText("My Link"));
linkByText.click();

Click link by partial text using Selenium WebDriver

WebElement linkByPartialText = driver.findElement(By.partialLinkText("First"));
linkByPartialText.click();

Click link by text using XPath in Selenium WebDriver

WebElement linkByTextUsingXPath = driver.findElement(By.xpath("//a[text()='First']"));
linkByTextUsingXPath.click();

Click link by partial text using XPath in Selenium WebDriver

WebElement linkByPartialTextUsingXPath = driver.findElement(By.xpath("//a[contains(text(),'ABC')]"));
linkByPartialTextUsingXPath.click();
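The two XPath locators above follow a simple pattern. As an illustration, the helper below (a hypothetical utility, not part of the WebDriver API) builds those XPath expressions from the link text; the returned strings are exactly what you would pass to By.xpath:

```java
// Hypothetical helper that builds the XPath expressions used above.
public class LinkXPath {

    // Exact text match, e.g. //a[text()='First']
    static String byFullText(String text) {
        return "//a[text()='" + text + "']";
    }

    // Partial text match, e.g. //a[contains(text(),'ABC')]
    static String byPartialText(String fragment) {
        return "//a[contains(text(),'" + fragment + "')]";
    }

    public static void main(String[] args) {
        System.out.println(byFullText("First"));      // //a[text()='First']
        System.out.println(byPartialText("ABC"));     // //a[contains(text(),'ABC')]
    }
}
```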

More details here on how to locate elements with Selenium WebDriver.

 

LoadFocus is a cloud testing platform.

Filed under NodeJS, Screenshot Testing.

What is Puppeteer? Puppeteer is a Node library that you can use to control headless Chrome via the DevTools Protocol.

The Chrome DevTools Protocol allows tools to instrument, inspect, debug and profile Chromium and Chrome browsers.

Puppeteer – Headless Chrome Node API works only with Chrome and uses the latest versions of Chromium.

Chromium is an open-source browser project that forms the basis for the Chrome web browser. One of the biggest differences between the two browsers is that, while Chrome is based on Chromium, Google adds some proprietary features to Chrome, such as automatic updates and support for additional video formats. Other features, like the usage-tracking or “user metrics” feature, can be found only in the Chrome browser.

Note: Puppeteer requires at least Node v6.4.0, but the examples below use async/await, which is only supported in Node v7.6.0 or greater.

Node.js has a simple module loading system. In Node.js, files and modules are in one-to-one correspondence (each file is treated as a separate module).

You can use Visual Regression Testing to take website screenshots, compare the generated images and identify differences pixel by pixel; a comparison image is shown next to the result’s screenshot, highlighting the differences in red.


 

Install Puppeteer

Here is how to install puppeteer from NPM Modules Registry (npm is the package manager for JavaScript):

 npm i puppeteer 

Below are code snippets on how to use Puppeteer – Headless Chrome Node API in order to take screenshots of your website.

Example – navigating to https://example.com and saving a screenshot as a PNG file named example.png:

Generate screenshots with Puppetteer

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();

You can also pass navigation options to page.goto as a second argument:

await page.goto('https://example.com', {
  timeout: 120000,
  waitUntil: ['load', 'networkidle'],
  networkIdleTimeout: 5000,
});

By default, Puppeteer takes screenshots of the viewport (the area which is visible by default when you open a website).

In order to take a screenshot of the full web page, you need to add the fullPage parameter to the screenshot method:


await page.screenshot({ path: 'example.png', fullPage: true });

Here is an example of how to take a screenshot of a webpage with Puppeteer using a custom web page size.
You just need to pass the width and height of the viewport for the browser to resize the web page to the desired size.


export async function takeScreenshot(page, filename, width, height) {
  await page.setViewport({ width, height });

  await page.screenshot({ path: `${filename}-${width}-${height}.png`, fullPage: true });
}

Here is how to call the above custom screenshot method:

await takeScreenshot(page, 'example.png', 320, 480);

Puppeteer also provides a list of mobile devices as a list of objects called DeviceDescriptors.
In order to emulate a web page in a mobile emulator with specific characteristics, you can import the
pre-defined list of mobile emulators from puppeteer/DeviceDescriptors:

 const devices = require('puppeteer/DeviceDescriptors'); 

Puppeteer's API is very similar to Selenium WebDriver's, but it works only with Google Chrome, while WebDriver works with most popular browsers.

More details on how to locate elements to use in order to interact with Puppeteer or Selenium WebDriver.

Debugging and Troubleshooting Puppeteer

1. Non-headless mode - for debugging purposes, it's sometimes useful to see what the browser is displaying. Instead of launching in headless mode, launch a full version of Chrome by passing headless: false when you launch the browser with Puppeteer:


const browser = await puppeteer.launch({headless: false});

2. Slow down screenshot generation - the slowMo option slows down Puppeteer operations by the specified number of milliseconds. It's another way to better understand what's happening in the code you've written and to debug more easily.


const browser = await puppeteer.launch({
headless: false,
slowMo: 250 // slow down by 250ms
});

3. Capture browser's console output


page.on('console', msg => console.log('PAGE LOG:', ...msg.args));

await page.evaluate(() => console.log(`url is ${location.href}`));

4. Enable verbose logging - All public API calls and internal protocol traffic will be logged via the debug module under the puppeteer namespace.

# Basic verbose logging

env DEBUG="puppeteer:*" node script.js


Filed under NodeJS.

Here are the steps you need to follow in order to debug ES6 code in the WebStorm IDE. After this, you’ll be able to take advantage of all the debugger’s features, like setting breakpoints, moving away from console logs and understanding your application’s code faster.

Prerequisites for Debugging ES6 in WebStorm IDE

 

1. Add the following NPM modules:

babel-core
babel-preset-es2015

2. Add the following devDependencies:

"devDependencies": {
    "gulp": "^3.9.1",
    "gulp-babel": "^6.1.2",
    "gulp-sourcemaps": "^2.4.1"
  }

3. Create a new gulp file, you can call it gulpfile.babel.js, and paste the code below into this file:

import gulp from 'gulp';
import sourceMaps from 'gulp-sourcemaps';
import babel from 'gulp-babel';
import path from 'path';

const paths = {
  es6: ['./src/**/*.js', './test/**/*.js'],
  es5: './dist',
  
  sourceRoot: path.join(__dirname, 'src'),
};

gulp.task('babel', () => gulp.src(paths.es6)
    .pipe(sourceMaps.init())
    .pipe(babel({
      presets: ['es2015'],
    }))
    .pipe(sourceMaps.write('.', { sourceRoot: paths.sourceRoot }))
    .pipe(gulp.dest(paths.es5)));

gulp.task('watch', ['babel'], () => {
  gulp.watch(paths.es6, ['babel']);
});

gulp.task('default', ['watch']);

 

4. Install NPM modules
It is a good idea to install gulp globally:

 npm install gulp -g
 

Install the npm modules by running npm install in the root of your application.

5. Run gulp from the root folder of your application; this will create the dist folder with the transpiled scripts and sourcemaps used for debugging ES6 code.

Debug Mocha Unit Tests with WebStorm

In order to debug and add breakpoints to the Mocha tests, you need to do the following:

  • add `--compilers js:babel-core/register` to the `WebStorm Mocha Configuration -> Extra Mocha options` field
  • start debugging by adding breakpoints to the unit tests and running them in Debug mode

Filed under JAVA, TestNG.

We are going to show how to use the DataProvider in your test cases created with the TestNG unit testing framework.
DataProviders are used to create data-driven tests. Basically, they help you run the same test case with different data sets.

Examples of DataProviders

We are going to use the two dimensional object array Object[][] in order to return data and make use of it in the test case.

In order to create a DataProvider, you need to:
– create a new method with the two dimensional object array Object[][] as a return type
– add the DataProvider annotation to the method
– give a name for the DataProvider
– return a new two dimensional object array Object[][] with the desired values for the test

Here are some examples of values that can be provided for the DataProvider in TestNG:

@DataProvider(name = "provideDaysInterval")
public Object[][] provideData() {
    return new Object[][]{{1}, {2}, {28}, {110}, {365}, {400}, {800}};
}

@DataProvider(name = "invalidIds")
public Object[][] provideInvalidIds() {
    return new Object[][]{{"a"}, {"asdasdasf"}, {"£!@$%^&*^(&*&^%£$@£!"}, {"1"}, {"2332423"}, {"123456786543sadfgh"}, {"1234567890"}};
}

@DataProvider(name = "minMaxDates")
public Object[][] provideMinMaxDateRanges() {
    return new Object[][]{
        {"2013-01-04", "2014-01-04", "2014-04-04", "2015-07-04"},
        {"2013-01-04", "2013-04-04", "2014-04-04", "2014-07-04"}
    };
}

Now, you can make use of these DataProviders in your test cases by following the below steps:

– add the dataProvider attribute to the @Test annotation and specify which DataProvider you want to use. Make sure the data types defined in the two dimensional object array Object[][] match the parameters of your test method; see more details about the implementation of the DataProvider in TestNG:

@Test(groups = {"smoke"}, dataProvider = "provideDaysInterval")
public void test_Days_Are_Valid(int numberOfDaysInterval){

}

@Test(groups = {"smoke"}, dataProvider = "minMaxDates")
public void test_Data_Ranges_Validate_Min_Max(String startDateFirst, String endDateFirst, String startDateLast, String endDateLast){

}
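Conceptually, TestNG invokes the test method once per row of the Object[][], unpacking each row into the method's parameters. A rough sketch of that mechanism in plain Java (no TestNG runtime; the class and recording list are only there to make the snippet self-contained):

```java
import java.util.ArrayList;
import java.util.List;

public class DataProviderSketch {
    // Same shape as the provideDaysInterval DataProvider above
    static Object[][] provideData() {
        return new Object[][]{{1}, {2}, {28}, {110}, {365}, {400}, {800}};
    }

    // Records each invocation, standing in for the @Test method
    static List<Integer> invocations = new ArrayList<>();

    static void test_Days_Are_Valid(int numberOfDaysInterval) {
        invocations.add(numberOfDaysInterval);
    }

    public static void main(String[] args) {
        // TestNG iterates the rows and calls the test once per row,
        // passing the row's values as the method arguments
        for (Object[] row : provideData()) {
            test_Days_Are_Valid((int) row[0]);
        }
        System.out.println(invocations.size()); // 7
    }
}
```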

This is how you create automated data-driven test cases with TestNG and DataProviders in Java.

 

We found the use of DataProviders very useful, especially for API Testing and UI Testing with Selenium WebDriver. More details on how to find web elements in Selenium WebDriver can be found here.

 

Click here for Online Tutorial for using Java and TestNG for testing with Selenium WebDriver.

Filed under Native Device Testing.

Using the Overview, Home and Back native buttons is pretty straightforward with WebDriver and Appium. Below are the code examples for all three buttons:

 

How to click the Back Button on Android with Selenium WebDriver and Appium

	public void clickBackButton(){
		((AndroidDriver<WebElement>)driver).pressKeyCode(AndroidKeyCode.BACK);
	}

How to click the Overview Button on Android with Selenium WebDriver and Appium

	public void clickOverviewButton(){
		((AndroidDriver<WebElement>)driver).pressKeyCode(AndroidKeyCode.KEYCODE_APP_SWITCH);
	}

How to click the Home Button on Android with Selenium WebDriver and Appium

	
	public void clickHomeButton(){
		((AndroidDriver<WebElement>)driver).pressKeyCode(AndroidKeyCode.HOME);
	}
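Under the hood, pressKeyCode sends the standard Android KeyEvent key codes. As a reference sketch (the constant values below are taken from android.view.KeyEvent, not from the Appium snippets above):

```java
// Android KeyEvent key codes behind the three buttons:
// KEYCODE_HOME, KEYCODE_BACK, and KEYCODE_APP_SWITCH (the Overview button).
public class AndroidKeyCodes {
    static final int HOME = 3;
    static final int BACK = 4;
    static final int APP_SWITCH = 187;

    public static void main(String[] args) {
        System.out.println("HOME=" + HOME + " BACK=" + BACK + " APP_SWITCH=" + APP_SWITCH);
    }
}
```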

 



Filed under Apache JMeter Tutorials.

In order to define your own variables and reuse them in your tests, it’s easier to use the User Defined Variables from JMeter.

Here is how to create a variable and use it in an HTTP Request from your JMeter Test Plan.

 

Steps

1. Open JMeter (here is a more detailed post on how to install JMeter) and Add a Thread Group to your Test Plan

2. Add an HTTP Request Sampler to your Thread Group

3. Right-Click the Thread Group and add User Defined Variables Config Element in your JMeter test

 

4. Create a new variable: var1 with value www.example.com

 

5. Go to the HTTP Request and, wherever you want the variable's value substituted, reference it as ${var1}.

 

6. Add a View Results Tree Listener in order to easily see the results of your request.

 

7. Make the request and you can see that ${var1} was replaced with www.example.com in the HTTP Request.

 

8. Add ${var1} also in the name of the HTTP Request sampler, and you can see the request name takes the value of the user defined variable var1.

 

Notes:

  • suggestion: for simplicity use User Defined Variables only at the beginning of a Thread Group
  • all User Defined Variables from a test plan are processed at the beginning no matter where they are added or placed in the JMeter Test Plan
  • JMeter User Defined Variables should not be used with functions that generate different results each time they are called
  • use User Parameters for defining variables during a test run instead of User Defined Variables
  • User Defined Variables are processed in the order they are added in the test plan, from TOP to BOTTOM
  • If, in your Test Plan, you have more than one Thread Group, use different names for different values, as UDVs are shared between Thread Groups.
  • You can reference variables defined in earlier UDVs or on the Test Plan.