Choosing an Automation Strategy

The initial stages of designing a test automation strategy and framework are not only the hardest, but they also affect you significantly in the long term. It’s crucial to get them right the first time; otherwise, a lot of unnecessary time and effort will be spent refactoring and improving the framework.

As someone who has built 30+ automation frameworks for a variety of well-known investment banks, I try to follow certain rules and practices to get the best return on investment from my frameworks. I’ll list a few things that I believe are necessary to consider for any automation strategy:

  1. BDD/Cucumber integration tests provide the best ROI and are easiest to maintain and expand.
  2. Your tests should hit as many scenarios as possible in the shortest amount of time.
  3. One complicated, sophisticated integration test doesn’t add as much value as hundreds of lightweight BDD/Cucumber tests (and it usually takes more effort to build). Lightweight tests give you the ability to hit multiple scenarios for a single feature, and that is where and how you usually find bugs.
  4. Test maintenance is very important. There is more value in having 10 stable tests than 100 flaky ones.
  5. You should be able to run your test suite on any environment; it’s not good to rely on real data unless you own that data. Build a separate test suite to test data quality.
  6. Fix or remove flaky tests; there’s no point in keeping them. Developers and teammates will lose trust in your tests if you keep flaky ones in your suite.
  7. No mocking! Leave mocking to developers; the role of a software engineer in test is to provide additional coverage.
  8. Leave unit testing to developers!
  9. If you are testing several different applications, create different projects/modules for each of them and move shared code to a common location.

The goal is to have a large number of very lightweight tests. These tests shouldn’t take long to run (<20 min), and they shouldn’t depend on data or environments.
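
To make the “lightweight BDD test” idea concrete, here is a minimal sketch using Cucumber-JVM with REST Assured. The feature wording, endpoint, and step names are my own illustrations rather than a prescribed structure:

// Illustrative feature file (posts.feature):
//   Scenario Outline: Fetching a post succeeds
//     When I request post <id>
//     Then the response status is 200
//   Examples:
//     | id |
//     | 1  |
//     | 2  |

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.restassured.RestAssured;
import io.restassured.response.Response;
import org.junit.Assert;

public class PostSteps {

    private Response response;

    @When("I request post {int}")
    public void iRequestPost(int id) {
        // One lightweight HTTP call per scenario keeps the suite fast.
        response = RestAssured.get("https://jsonplaceholder.typicode.com/posts/" + id);
    }

    @Then("the response status is {int}")
    public void theResponseStatusIs(int status) {
        Assert.assertEquals(status, response.getStatusCode());
    }
}

Each additional scenario row costs one line in the feature file, which is how hundreds of lightweight scenarios stay cheap to add and maintain.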

If you follow these practices, your tests will provide the highest return on investment and will be easy to maintain, run, and expand.

Creating a Simple REST Test Automation Framework


In this blog post, I will show how to quickly set up a test automation framework for REST API testing using Java.

TOOLS

I will use a few of my favourite tools for this simple framework: Serenity for running tests and reporting, REST Assured for making HTTP calls, JUnit as the test runner, and Maven as the build tool.

Example of a simple REST test using Serenity/RestAssured/JUnit:

import io.restassured.RestAssured;
import net.serenitybdd.junit.runners.SerenityRunner;
import net.thucydides.core.annotations.Title;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SerenityRunner.class)
public class SimpleRestFramework {

    private String url = "https://jsonplaceholder.typicode.com/posts/";

    @Test
    @Title("Simple Serenity RestAssured Test")
    public void simpleRestGetTest() {
        // Fire a GET request and assert that the endpoint responds with HTTP 200.
        RestAssured.when().get(url)
                .then().statusCode(200);
    }
}

After you run this test case using JUnit, Serenity will generate the report data. The only thing left is to run the following Maven command, which generates an HTML5 report summary (index.html):

mvn serenity:aggregate

This will generate a Serenity report, which can be found in target/site/serenity. The report contains a lot of useful information, such as the request/response pairs and the steps that were executed.
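
From here, the same pattern extends naturally to richer assertions. Below is a sketch that also checks the response body, assuming the payload shape that jsonplaceholder returns:

import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import net.serenitybdd.junit.runners.SerenityRunner;
import net.thucydides.core.annotations.Title;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SerenityRunner.class)
public class SimpleRestBodyTest {

    private String url = "https://jsonplaceholder.typicode.com/posts/";

    @Test
    @Title("GET a single post and assert on its body")
    public void simpleRestBodyTest() {
        RestAssured.when().get(url + "1")
                .then().statusCode(200)
                .body("id", equalTo(1)); // jsonplaceholder returns {"id": 1, ...}
    }
}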

Using this setup you can start building a basic and simple REST automation framework with a great test runner and report-building tool. This is basically all you need for REST testing using Java. Good luck!

How Does Automation Fit in the SDLC?

Test automation has been around for a while, but it’s still not always clear how it should work. Just like DevOps, it’s a relatively new role that should improve the SDLC; however, it’s not an easy task to implement either role efficiently and effectively in a traditional software development life cycle. In this blog post, I will share my experience, opinions, and views on how to make the most out of the test automation role.

I have been working in this field for the last 5 years. Even though that experience on its own is not that impressive, I have had the chance to try myself out in a variety of fields and companies: an investment bank, a smaller software house, an information and data company, and retail banks. These companies came in very different sizes, ranging from 50 to 100k+ employees. On top of that, I have attended a large number of extensive interviews, most of the time reaching the final stages, at companies such as Goldman Sachs, Royal Bank of Canada, Barclays Capital, UBS, Bloomberg, and some larger hedge funds. After talking with so many people from so many companies and combining that with my own experience, I started to realize that there is no standard for how test automation should fit into the SDLC. I have combined all of these experiences and interview conversations about teams and testing processes, and decided to write down the common practices and my views on them.

Waterfall Methodology

The majority of banks still use the waterfall methodology. Most of the time it has some agile features, like scrum calls, Jira boards, and sprint demos, but it’s still essentially waterfall because the teams in the SDLC are separated. In this scenario, BAs and the business pass requirements to developers, who pass implemented features to the QA team. Quality assurance, in this case, is a separate team that works independently from the developers. In this setup it’s much harder to have an effective automation team, because automation, like DevOps, exists to make the process more agile, flexible, and quicker; the boundaries between the developers and the automation team don’t allow it to be used to its full potential. But these things happen, and to make the best of this scenario, I believe these steps should be taken into account:

  • Developers concentrate on unit tests for the features that they implement.
  • QA concentrates on BDD / isolated component / integration tests.
  • BDD tests should be based on requirements from BAs, the business, and developer unit tests.
  • Because this methodology is not very agile or flexible, isolated component testing and end-to-end integration tests carry much more weight and value than usual.
  • BDD tests lose a little bit of value in this methodology, because quick and stable test results (the strengths of BDD) are not as important in waterfall.
  • Essentially, in this setup the automation developer becomes a very technical BA who can check the quality of the product in the very late stages of the SDLC.
  • Because QA is a separate team, communication with the developers is a must; developers should always know the coverage of QA’s automated tests.

SCRUM

This is, I believe, where test automation shines the most. Working alongside developers, with lots of communication and idea sharing, creates a very efficient environment and utilizes automation the most. Having a dedicated person who works on improving unit tests and implementing BDD and integration tests for every feature, all plugged into the developers’ CI cycle, is great. Here are some thoughts:

  • Unit tests should be reviewed by the automation tester.
  • The automation tester should be allowed to improve and implement unit tests.
  • BDD has a lot of value in SCRUM; quick, responsive, and stable test suites can be easily and effectively utilized in CI.
  • Good unit and BDD test utilization in the CI cycle means there is less need for clumsy and flaky end-to-end, black-box integration tests.
  • Fewer integration tests mean that there can be more focus on performance tests and continuous integration efficiency.
  • QA plays a very important role in the continuous integration/deployment cycle.
  • 3 devs / 1 automation developer / 1 DevOps sounds like a great setup.

What to automate?

The scope of automation is so large that it’s very important to know where to start and what to target. From my own personal experience, the business quite often wants to see two things targeted by automation:

  • Regression automation.
  • Finding bugs using automation.

These two tasks are pretty much mutually exclusive, which makes finding common ground very tough. You either go for time savings and target regression automation, which is usually a massive and resource-exhausting task, or you target new features with lower-level tests and try to find bugs.

Luckily there is a solution for these kinds of situations and it’s called a testing pyramid.

[Image: the testing pyramid]

Following the pyramid approach will maximize return on investment and utilize test automation the most. This is my own personal approach:

  • Unit tests – implemented by developers; should include negative tests. (very high ROI)
  • Unit integration tests – implemented by developers; should include negative tests. (very high ROI)
  • Acceptance tests – BDD-style tests based on the business analysts’ acceptance requirements; implemented by the automation developer. (high ROI)
  • Component-based integration tests – BDD can also be used for these; implemented by the test automation developer. (medium ROI)
  • System / UI Selenium / end-to-end black-box tests – implemented by the test automation developer. (low ROI)

Designing Performance Tests

For many of us who do not work as performance test engineers, performance testing is uncharted territory. It’s a common problem because there are so many performance test types and even more ways to measure them. Depending on the application you want to test, the scope might be quite large, sometimes too large to even cover basic performance testing needs. It’s also very hard to know where to start.

In addition, reporting is an issue: even if you have the right method or tool, it might be hard to integrate it with CI (continuous integration) and compare your performance test runs with previous data to measure changes.

Performance testing types:

After going through many different types of performance tests, I compiled a simple list that I use to determine and implement basic performance test coverage for an application. I know there are a few more test types, but I believe those are very specific, made for niche software, and complicate the coverage definition quite a bit. The test types below can be applied to most software and provide basic coverage that can be used as a performance threshold for later testing.

  • Performance test – to determine or validate speed, scalability, and/or stability.
  • Load test – to verify application behaviour under normal and peak load conditions.
  • Stress test – to determine or validate an application’s behaviour when it is pushed beyond normal or peak load conditions.
  • Throughput test – to determine how many users and/or transactions a given system will support while still meeting performance goals.

Reporting/Tools:

The next step in this process is to select the tools you’ll use to run and report the tests. This is just an example and can be extended with more tools that suit your application.

Splunk
  • Can be used for very detailed report generation.
  • Performance measuring, calculations, and reporting are done on the Splunk side.
  • Pretty much real-time reporting.
  • Splunk keeps 30 days of logs.
JMeter
  • Used to measure SOAP and DB performance.
  • Easy to integrate with Jenkins.
  • Good reports.
Jenkins
  • TestNG pass/fail tests (see the sketch after this list).
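
As a rough illustration of that last point, here is the kind of TestNG pass/fail timing check that Jenkins could run on every build. The endpoint and threshold are placeholders for whatever your application needs:

import java.net.HttpURLConnection;
import java.net.URL;

import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginPerformanceTest {

    // Placeholder values: the endpoint and threshold depend on your application.
    private static final String LOGIN_URL = "https://your-app.example.com/login";
    private static final long THRESHOLD_MS = 2000;

    @Test
    public void loginRespondsWithinThreshold() throws Exception {
        long start = System.currentTimeMillis();

        HttpURLConnection connection = (HttpURLConnection) new URL(LOGIN_URL).openConnection();
        int status = connection.getResponseCode(); // fires the request
        long elapsed = System.currentTimeMillis() - start;
        connection.disconnect();

        Assert.assertEquals(status, 200);
        Assert.assertTrue(elapsed < THRESHOLD_MS,
                "Response took " + elapsed + " ms, threshold is " + THRESHOLD_MS + " ms");
    }
}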

Scope:

Now that we have the test types and tools ready, we should define our performance testing scope. Do we only want to check application performance, or do we want to dig deeper and also check the DB and services?

  • Application – testing application-level system performance.
  • SOAP services – testing the performance of SOAP services.
  • Database – testing the performance of the DB connection.
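
For the database scope, even a minimal connection-timing probe gives a useful baseline number. A sketch, with the JDBC URL and credentials as placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class DbConnectionTiming {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- point these at your test database.
        String jdbcUrl = "jdbc:postgresql://localhost:5432/testdb";

        long start = System.nanoTime();
        try (Connection connection = DriverManager.getConnection(jdbcUrl, "user", "password")) {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Connection established in " + elapsedMs + " ms");
        }
    }
}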

Performance Test Table:

At this point, only one table is left to finish off the process, and it depends on your application/website. You write down the name of the component/application/website you want to test and add the data from the tables above. It’s a pretty straightforward process, and as soon as you begin, more ideas will pop up and you will be able to cover the majority of your application.

Example:

| Test Name | Type | Scope | Component | Tool | Notes |
| --- | --- | --- | --- | --- | --- |
| Uploading files on website | Load test | Application | Uploading | Jenkins | With this test we will check how normal/peak load affects the application |
| Uploading files on website | Stress test | Application | Uploading | Jenkins | This test will push the application beyond supported levels (uploading 100k files) |
| Login into the system | Performance test | Application | Uploading | Jenkins | |

All in all…

I personally found this to be a great way to determine the scope of performance testing. In addition, it’s easy to demonstrate your intentions and thought process to colleagues, management, and the business, which makes things easier for everyone.

All in all, it’s quite hard and confusing to efficiently define a basic performance testing scope, and it’s hard to know where to begin. This method worked well for me: it allowed me to create a large number of tests and then consolidate them into one basic performance test suite, which became a great threshold for future runs. And as I mentioned, it’s quite easy to share the final table with everyone who is interested, to collect as much feedback and as many suggestions as possible.

Integration Testing with Node.js


Recently, more and more front-end tools have started to move towards Node.js. Even though the JavaScript community and the number of available tools are expanding rapidly, there isn’t much choice with regard to integration testing. However, the move to Node.js has brought us a few very interesting and promising front-end integration testing tools.

Selenium has historically been a tool for object-oriented languages: it was mostly used with Java and was quite popular with C#. Even though there are many Java-based Selenium wrappers and libraries, until recently there were not many for JavaScript. I decided to assess these JavaScript integration testing tools and compare their performance, usability, efficiency, and stability with their Java equivalents; in this blog post, I will go through my findings.

NightwatchJS and WebdriverIO are very similar: both are quick, easy-to-configure, simplified Selenium wrappers, and since they serve the same purpose, they have similar functionality. After trying both tools, it seems the developers are pushing towards the idea of simple, small, quick, and efficient integration testing. It feels like they are telling us that massive, clumsy, and inefficient Java Selenium tests are the past, and they are changing the direction of integration testing. I have to agree; I like it, and it makes a lot of sense. However, by moving towards lightweight integration testing you lose some of the important bits that the clumsy Java tests have (an abstract page factory, a proper page object pattern, parallel runs, multithreading, etc.).

So what are these tools? What are the main features? Both of them are very easy to configure, easy to run, and give very quick feedback. In addition, they significantly simplify the process of writing automated tests.

So how do these tools stack up against Selenium and other Java Selenium wrappers?

Configuration

There is a massive difference in setup time. Selenium, or any Java Selenium wrapper, requires a significant amount of time and effort to set up. The Node.js tools, however, take about 5 minutes. This is a great achievement: you can set up your project’s testing framework in minutes. It also fits well with the idea of lightweight testing.

Test Runners

Java Selenium wrappers and libraries have two main test runners, JUnit and TestNG. Both have their strengths and weaknesses, but both are very slow in comparison with the popular JavaScript tools. Before comparing speeds, we have to take into account that the Node.js tools introduce an additional step in the communication between the tool and the Selenium server, which takes time, since every call goes through an additional HTTP request. Even with that extra step, after running the same test on several test runners, I came to the conclusion that Mocha integration tests run ~25% faster than their JUnit or TestNG equivalents. That is a huge difference considering the additional communication step.

[Image: speed test results]

Test Writing

This is one of the main strengths of the Node.js testing tools. They have simplified test writing to the point that a test can be written in significantly less time and requires less technical knowledge from the tester. Tests are also much easier to read and maintain, which makes it possible to write a test and scrap it without any bad feeling that you are wasting time. However, some of the decisions are questionable to me, since the framework introduces callback-based test writing, which makes it hard to edit tests or chain more advanced scenarios.

JavaScript code example (keep in mind that this code doesn’t need to define @BeforeTest and @AfterTest, so this is basically all you need for the test):

'Enter text and assert that it was submitted' : function (browser) {
   browser
      .url('http://localhost:8080/')
      .waitForElementVisible('body', 1000)
      .setValue('body > section > div > header > input', ['nightwatch', browser.Keys.ENTER])
      .waitForElementVisible('body > section > label', 1000)
      .getText('body > section > div > section > label', function(result) {
          this.assert.equal(result.value, 'nightwatch');
      })
}

The Java code still needs to include the optional but important @BeforeTest and @AfterTest code. In addition, I have skipped the sleep timeouts that are built into the Nightwatch methods; including them would make the Java code even bulkier.

  @Test
  public void testSpeed2() {
    driver.get("http://localhost:8080/");
    driver.findElement(By.cssSelector("body")).isDisplayed();

    // Type the value and submit it with the Enter key.
    driver.findElement(By.cssSelector("body > section > div > header > input")).sendKeys("nightwatch");
    driver.findElement(By.cssSelector("body > section > div > header > input")).sendKeys(Keys.RETURN);

    // Look the label up once to make sure it exists, then read its text.
    driver.findElement(By.cssSelector("body > section > label"));
    String text = driver.findElement(By.cssSelector("body > section > label")).getText();

    Assert.assertEquals("nightwatch", text);
  }

Reporting

This is one area where the Node.js tools have a clear drawback. Since the whole approach is built around response time and efficiency, there is no place for sophisticated, rich HTML5 reports. The Node.js tools are limited to console output, which yields limited feedback. There are a few plugins that improve on this; however, they are nowhere near ready to replace the Java equivalents.

[Image: NightwatchJS simple report]

Communication Problems

Both Node.js tools introduce an additional step in the communication with the Selenium server, and in this respect both are far from mature: their HTTP layer has no retry or timeout handling, and no content-length handling for GET and DELETE requests. So in the case of queuing or parallel runs, both tools break very easily.

All in all, the Node.js integration testing tools are bringing a shift to the field, moving integration testing from bulky, clumsy, and flaky towards a more lightweight, efficient, and easy-to-maintain approach. Unfortunately, these tools are still not mature enough for a corporate environment, since both lack stability and some important features. In addition, the shift towards lightweight integration testing hasn’t gained that much traction yet, so it is not something everyone is currently looking for. Hopefully, the tools will become more and more popular as JavaScript grows, which would make for a bright future for Node.js integration testing tools.

Microsoft Edge Automation

In this blog post, I will try to go through the main issues that I faced trying to automate Microsoft Edge testing.

Microsoft Edge is a universal Windows application that targets device families instead of an operating system. Because of that, the application can easily run on different Windows devices and doesn’t rely on the OS as much as browsers did on previous versions of Windows. This is advantageous if you are a user with several Windows devices; on the other hand, it makes automating browser testing difficult.

Since WebDriver is an emerging W3C standard, Microsoft obviously had to release a Selenium WebDriver implementation for the Edge browser. Microsoft developers admitted that they completely forgot about WebDriver, so it was a “last second” release. As things stand, we will have to wait for a future release to get the final version of the driver, which will inevitably lead to dependency issues, since the temporary versions will need to be updated.
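
For reference, wiring Edge into a Selenium test looks much like any other driver. A minimal sketch, assuming the standalone Microsoft WebDriver server is installed and on the PATH:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.edge.EdgeDriver;

public class EdgeSmokeTest {

    public static void main(String[] args) {
        // Requires Microsoft's WebDriver server for Edge; the URL is a placeholder.
        WebDriver driver = new EdgeDriver();
        driver.get("https://example.com/");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}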

IE Performance Issues and Microsoft Edge

Microsoft Edge inherited performance issues from Internet Explorer. According to Microsoft, the reason the current IE driver is slow is an incompatibility with the machine architecture: if you use the 32-bit Selenium driver on a 64-bit machine to launch a 64-bit IE browser, you’ll see performance issues. However, even if you have everything set up perfectly, the IE browser and the WebDriver will still have some performance issues. Microsoft Edge is not much different; it has similar performance issues. Once again, we will need to wait for future releases before we see performance improvements.

Universal Application Cons

Since Microsoft is trying to make Edge a universal application, it still (like the older versions) doesn’t have a built-in user profiling system, which is quite a complication, since WebDriver relies on this feature a lot. To configure the browser, you currently can’t just set up a user profile as you can with Chrome or Firefox; most of the settings are unreachable in Edge, which is a huge disadvantage. No control over cookies, certificate warnings, pop-up windows, and other settings makes it more complicated than it should be. The reason it is like this right now is that Microsoft Edge uses the Windows profile rather than an online browser profile. However, this feature may come in the future.

Problems with launch

Since browser automation doesn’t end with the Selenium WebDriver, I had to try some other tools, like BRJS (which allows browsers to be configured and launched for testing). A universal Windows application requires a certain device configuration to be passed in order to launch, which means you can’t simply launch the browser by double-clicking an executable file. It’s definitely not the end of the world, since you can force the launch with a shell script, so a quick cmd file solves the problem; it’s also possible to create your own executable that launches the browser using the same shell command or invokes the process. The real issues arise when you try to terminate the browser after the test run: since you are not launching the browser process directly yourself, you have no control over it after the launch. Writing your own executable could help, since the same executable would be able to kill the process after the run, but that is a whole new topic. So in addition to the issues that Microsoft Edge carried over from Internet Explorer, we now need to add launching and termination problems.
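
Here is a rough sketch of that launch-and-kill cycle driven from Java. The protocol-handler launch command and the process name are assumptions based on how the early Edge builds behave:

public class EdgeLauncher {

    public static void main(String[] args) throws Exception {
        // Launch Edge through cmd, since the universal app cannot be started
        // by double-clicking an executable ("microsoft-edge:" is the protocol handler).
        new ProcessBuilder("cmd.exe", "/c", "start", "microsoft-edge:http://localhost:8080/")
                .inheritIO()
                .start()
                .waitFor();

        // ... run the tests here ...

        // Kill the browser afterwards; taskkill by image name is blunt but workable.
        new ProcessBuilder("cmd.exe", "/c", "taskkill", "/F", "/IM", "MicrosoftEdge.exe")
                .inheritIO()
                .start()
                .waitFor();
    }
}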

All in all, as things stand, the Edge browser makes testing very tricky compared to Internet Explorer and other browsers. It continues to have the same performance and profiling issues that IE had. In addition, upcoming Edge versions will only be compatible with their specific Edge driver, which will cause dependency issues. And let’s not forget the difficulty of launching and killing the browser. The Microsoft Edge development team has promised to address some of these issues, so let’s hope things get better in the future.

Auto-Generating Release Notes Using the JIRA REST Java API

Recently I was exploring the possibility of automating the release note process. It’s quite a boring and annoying task for developers and QAs (and even tech leads) alike. I came up with two different approaches, and I think both of them might be used beneficially. However, this is in no way a full application that completely replaces release notes; it’s just my thought process and a few quick hacks to test things out.

Used technologies:

  • Apache Maven 3.3.3
  • Jira-rest-java-client-api 2.0.0-m25

Firstly, I was thinking about an actual generator. The idea was very simple: use the REST API to connect to the JIRA board, select the project and fix version, and get a list of tickets. Because we rely on the fix version, the list will only contain the relevant tickets. Go through the list and extract each ticket number and description (you need a good regex). Finally, format and export everything into a .txt file.

This option has a few weaknesses. You always want your release notes to be official and brief, while most of the time the Jira description looks rather different: usually messy and long. However, you only need to change the way your regex works: put your release note comment between # tags and export only what is between them. This way you get the best of both worlds: long, messy descriptions, and official release notes inside the hash tags. The auto-generator easily picks up the regex and makes your life easier.
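
For example, a regex along these lines would pull out only the text wrapped in hash tags (the pattern and the sample description are illustrative):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HashTagRegexExample {

    public static void main(String[] args) {
        // Only the text between the # tags ends up in the release note.
        String description = "Long messy Jira text... #Fixed FX rate rounding in the order blotter# ...more noise";
        Matcher m = Pattern.compile("#(.*?)#").matcher(description);
        while (m.find()) {
            System.out.println(m.group(1)); // -> Fixed FX rate rounding in the order blotter
        }
    }
}

The generator itself, with the regex left as a placeholder, looked roughly like this: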

import java.io.PrintWriter;
import java.net.URI;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.atlassian.jira.rest.client.api.JiraRestClient;
import com.atlassian.jira.rest.client.api.domain.BasicIssue;
import com.atlassian.jira.rest.client.api.domain.SearchResult;
import com.atlassian.jira.rest.client.internal.ServerVersionConstants;
import com.atlassian.jira.rest.client.internal.async.AsynchronousJiraRestClientFactory;

public class GetIssues {

    private static URI jiraServerUri = URI.create("https://jira.com");

    public GetIssues(String fixVersion) throws Exception {

        final AsynchronousJiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
        final JiraRestClient restClient = factory.createWithBasicHttpAuthentication(jiraServerUri, "username", "password");

        try {
            final int buildNumber = restClient.getMetadataClient().getServerInfo().claim().getBuildNumber();
            PrintWriter writer = new PrintWriter("pathtoyourreleasefile/RELEASENOTE.txt", "UTF-8");
            String pattern = "YOUR REGEX";
            Pattern r = Pattern.compile(pattern);

            // Print all issues matching a JQL query (here: all issues for the given fix version).
            if (buildNumber >= ServerVersionConstants.BN_JIRA_4_3) {
                final SearchResult searchResult = restClient.getSearchClient()
                        .searchJql("project=MFXMOTIF AND fixVersion=" + fixVersion).claim();
                int i = 0;

                writer.println("Version: " + fixVersion);
                writer.println("===========================\n");

                for (BasicIssue issue : searchResult.getIssues()) {
                    // Run the regex over each issue and write every match next to its key.
                    String line = issue.toString();
                    Matcher m = r.matcher(line);
                    while (m.find()) {
                        writer.println(issue.getKey() + " : " + m.group());
                        System.out.println(issue.getKey());
                        i++;
                    }
                }
                System.out.println(i); // number of matched release note entries
                writer.close();
            }
        } finally {
            restClient.close();
        }
    }
}

Release Note CI Checker

The other, more viable, option is to make a release note checker, which takes the list from the release note file and compares it to the list in Jira. The output shows what you are including in or excluding from the release notes.

I found that very beneficial, and my team and manager liked the idea a lot!

Every time there was a commit, it would trigger the Jenkins build, which would trigger my script. The script would grab all of the current issues from the Jira Done column and get their keys (Jira IDs). Then it would download the release note file of the current CI commit build and search it for the Jira keys it found on the board. Finally, it would generate an HTML5 report and upload it to the CI build. This way, anyone could easily see which issues are missing from or included in the current build, so you would always be up to date. No more missed release notes!
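
And here is a rough sketch of that comparison step, assuming the Jira keys and the release note file have already been fetched (the file name and keys are placeholders):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReleaseNoteChecker {

    public static void main(String[] args) throws Exception {
        // Placeholder data -- in the real script these come from the Jira board
        // and from the release note file attached to the CI build.
        Set<String> jiraKeys = new HashSet<>(Arrays.asList("MFXMOTIF-101", "MFXMOTIF-102"));
        String releaseNotes = new String(
                Files.readAllBytes(Paths.get("RELEASENOTE.txt")), StandardCharsets.UTF_8);

        // Report which Done-column issues made it into the release notes.
        for (String key : jiraKeys) {
            System.out.println((releaseNotes.contains(key) ? "INCLUDED: " : "MISSING:  ") + key);
        }
    }
}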
Every time there was a commit it would trigger the Jenkins build which would trigger my script. This script would grab all of the current issues from Jira done column, get the key(Jira ID). Then it would download the current release note file of the current CI commit build and go through the file searching for those Jira keys that it found on the board. Finally, it would generate an HTML 5 report and upload it to the CI build. This way anyone could easily see what issues are missing or are included in the current build. So you would be always up to date. No more missed release notes!