Test Automation Beyond Test Cases

The traditional and most common way of looking at “automation” in the software testing field is the automation of suites of test cases that verify a software application works as expected. The goal of having such suites of automated checks and running them over and over again is to provide quick and continuous feedback on the state of the application under test.

These suites are created and used mostly to validate that selected, known features of the application have not regressed from their previous states. Many closed-source and open-source tools are available in the industry to facilitate creating, generating, maintaining, and managing these test cases.

Oftentimes, we simply refer to them as “Automation Tools” and to the process of executing/running those verifiers as “Automation Execution”. The problem that has emerged in the testing sphere is that the term “Automation” has become synonymous only with test case creation and execution.

As a consequence, we are not thinking about the broader applicability of automation to other areas of testing. We are not exploring other ways in which automation can help us in our testing activities, such as performing repetitive, boring, and mundane auxiliary tasks quickly, efficiently, and precisely.

In this article, I will present some simple automation examples which will make you think about automation at a broader level. For the examples, I have used shell (bash) scripting.

Auto-generate system information

If you are dealing with hundreds or thousands of tests, it is almost always preferable to run them in parallel on multiple machines (real or virtual) rather than sequentially on a single machine. This not only speeds up execution and shortens the test cycle, but also makes it possible to run the tests on different system configurations and combinations.

You may want to auto-generate and retrieve system-level information from those machines periodically to understand their behavior before, during, or after the tests are run. Below is a simple automation script that you can schedule to run at certain intervals to gather this kind of data:

#!/bin/bash

# Description: This script will retrieve system-level information from the machine

echo "************ Start of Script ************"
echo
host=`hostname`
echo "Machine is: $host"
echo
df -h
echo
top -b -n 1 | head -5
echo
free -m
echo
uptime
echo
iostat
echo "************ End of Script ************"

Here the “hostname” command will output the machine’s hostname, and the “df -h” command will give the machine’s file system disk space statistics in human-readable units (kilobytes, megabytes, and gigabytes). The “top -b -n 1 | head -5” command will show the first 5 lines of top’s summary, covering tasks, memory usage, CPU usage, and swap space usage, and the “free -m” command will display RAM and swap usage in megabytes.

The “uptime” command will display how long the machine has been running, together with the current time, the number of users with active sessions, and the system load averages for the past 1, 5, and 15 minutes. Lastly, the “iostat” command will provide CPU and I/O usage statistics. All this information will help you diagnose any system-related issue that might be interfering with the execution.
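
If you save the script above as, say, “system_info.sh” (the script name and the paths below are my own assumptions, not part of the original setup), one possible way to schedule it is a cron entry that runs it every hour and appends the output to a per-day log file:

# Hypothetical crontab entry (edit the crontab with: crontab -e)
# Runs the script at the top of every hour and appends the output to a dated log;
# note that % must be escaped as \% inside crontab entries
0 * * * * /home/sumon/scripts/system_info.sh >> /home/sumon/logs/sysinfo_$(date +\%F).log 2>&1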

Parse and pull information from execution logs

Whenever execution happens, any tool or framework driving the execution usually generates logs and saves them in log files. Logs are nothing but a collection of useful and relevant information captured during the execution flow. This information can comprise anything – errors/exceptions, steps, warnings, alert messages, debug information, general informative statements, execution status, timestamps, etc.

Most commercial tools provide built-in capabilities to generate execution logs, and if you are using a framework based on open-source libraries, you have probably imported and configured logging libraries to do this for you. Too often, these logs become lengthy and verbose because of the large number of tests being executed or the sheer number of logging statements used.

Logs are key components in test automation, and sometimes it becomes important to extract useful information from these lengthy logs repeatedly and quickly. You would not want your team to manually go through these big log files and spend an enormous amount of time finding specific information.

This is a perfect candidate for automation: you can write a script to parse and pull the required information from the log files. Depending on the need, anyone in your team can run it on demand or schedule it through a “cron” job. Consider the log example below, the command, and the output generated once the command is run:

[INFO] - Initiating Test Suite
[INFO] - Running tests in Browser: Chrome
[WARN] - A warning message
[INFO] - Chromedriver path: /usr/bin/chromedriver
[INFO] - Launched URL: https://www.google.com
[ERROR] - Assertion Error: Expected Value[search1] not equal to Actual Value[search]
[INFO] - Running tests in Browser: Firefox
[INFO] - Launched URL: https://www.google.com
[ERROR] - Assertion Error: Expected Value[search2] not equal to Actual Value[search]
[ERROR] - Assertion Error: Expected Value[1] not equal to Actual Value[2]
[INFO] - Product being searched: IPhone
[INFO] - Product being searched: Computer
[ERROR] - Assertion Error: Expected Value[iPhone11] not equal to Actual Value[iPhone]
[WARN] - A warning message

grep ERROR Execution.log | awk -F' - ' '{ print $2 }'

Assertion Error: Expected Value[search1] not equal to Actual Value[search]
Assertion Error: Expected Value[search2] not equal to Actual Value[search]
Assertion Error: Expected Value[1] not equal to Actual Value[2]
Assertion Error: Expected Value[iPhone11] not equal to Actual Value[iPhone]

The log file “Execution.log” contains sample test execution logs generated during a run. You may frequently need to pull out only the error descriptions from large log files like this one.

You can automate this task by writing a script around the “grep” filter command, which here extracts the lines containing the word “ERROR”. The output is then piped to the “awk” command, which splits each line into two fields on the delimiter “ - ” and prints the second field, the error message itself. Similar automation scripts can also be written to extract test data or reports from machines.
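
Going a step further, a small script can also summarize the log before pulling out the details. The sketch below reuses the “Execution.log” file from the example above; everything else is my own assumption. It counts how many lines each log level produced and then lists the error messages:

#!/bin/bash

# Sketch: summarize log levels in Execution.log, then list the error messages

log="Execution.log"
echo "Log level summary:"
grep -o '^\[[A-Z]*\]' "$log" | sort | uniq -c
echo
echo "Error details:"
grep ERROR "$log" | awk -F' - ' '{ print $2 }'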

Check server connectivity

To run your tests, you might be using several server machines with minimal resources. You certainly don’t want your tests to fail because of connectivity issues.

To catch connectivity issues beforehand, you can list the IP addresses of all the server machines in a single file (in the example, the file is named “remote_hosts”) and then run the sample automation script below, either on a regular schedule or right before the tests start running on those machines.

#!/bin/bash

# Description: This script will ping multiple machines and display the status

echo "************ Start of Script ************"
hosts="/home/sumon/scripts/remote_hosts"
for ip in $(cat $hosts)
do
    ping -c1 $ip &> /dev/null
    if [ $? -eq 0 ]
    then
        echo "$ip : Connected"
    else
        echo "$ip : Disconnected"
    fi
done
echo "************ End of Script ************"

Notify when the number of files exceeds a threshold

In the absence of a CI server like Jenkins, I have observed instances where the machines that run the tests gather the post-execution files and keep storing them indefinitely. Those files are sometimes used to generate some kind of report, or are stored simply as historical evidence of the test runs. But sometimes the number of stored files grows so large that disk space issues arise.

One effective way to deal with such a situation is to write an automation script that runs on a schedule, checks the number of files in the location, and sends an email notification to the team if the count exceeds a threshold the team has agreed on.

Once the threshold is reached, the script can also remove some or all of the old/unused files. Below is one such example, where files older than 30 days are identified and deleted, and an email is sent (a count-based variant is sketched right after the script):

#!/bin/bash

# Description: This script will find all the files older than 30 days, delete them and send mail

echo
echo "************ Start of Script ************"
fileCount=`find /home/sumon/practice/ -mtime +30 -exec ls -l {} \; | wc -l`
if [ $fileCount -gt 0 ]
then
    echo "Number of file(s) older than 30 days: $fileCount"
    echo "Number of file(s) older than 30 days: $fileCount" | mail -s "Number of files exceeded the threshold" <email-ID>
    echo
    echo "File details: "
    find /home/sumon/practice/ -mtime +30 -exec ls -l {} \;
    echo
    echo "Removing file(s) older than 30 days…"
    find /home/sumon/practice/ -mtime +30 -exec rm {} \;
    echo "File(s) Removed"
else
    echo "No files are older than 30 days"
fi
echo "************ End of Script ************"
echo
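
The script above works on file age; a count-based variant that matches the threshold idea described earlier could look like the sketch below. The directory, the threshold value, and the <email-ID> placeholder are assumptions to adjust to your own setup:

#!/bin/bash

# Sketch: notify the team when the number of stored files crosses an agreed threshold

resultsDir="/home/sumon/practice"
threshold=500
fileCount=`find $resultsDir -type f | wc -l`
if [ $fileCount -gt $threshold ]
then
    echo "$fileCount file(s) in $resultsDir (threshold: $threshold)" | mail -s "Number of files exceeded the threshold" <email-ID>
else
    echo "File count ($fileCount) is within the threshold ($threshold)"
fi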

Check process status

The last of our automation examples is for when you want to kill defunct processes automatically. If a system accumulates a lot of defunct processes and the process table fills up, new processes cannot be launched. This may impact your testing of applications that depend on multiple processes being launched and running.

Surely you would not want to kill those processes manually, one by one. A one-liner automation script can come to your rescue in such a situation. You can share the script with your team members, and they will love it.

#!/bin/bash

ps -ef | grep "<command>" | grep -v grep | awk '{ print $2 }' | xargs -I{} kill {}
echo "All defunct processes are killed"

First, this script checks the status of all active processes using the “ps” command: “ps -ef” lists every running process along with its PID (process ID). The output is then passed to the “grep” filter as input, which looks for the provided <command> processes in the running process list.

Its output is then sent as input to “grep -v grep”, which omits the lines containing the string “grep” (since the previous grep would otherwise match its own process). The “awk” command then takes that input and prints the 2nd field, the PID, which “xargs -I{} kill {}” uses to kill each PID, and thus each process.
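
Note that a truly defunct (zombie) process cannot be killed directly; it disappears only once its parent reaps it or exits. If your target really is zombies rather than a specific command, a variant like the sketch below (my own addition, not part of the original one-liner) finds the parents of zombie processes and signals them instead. Review the PID list before running it, since killing a parent process can have wider effects:

#!/bin/bash

# Sketch: find the parents of defunct (zombie) processes and signal them
# ps -eo ppid=,stat=  -> parent PID and process state for every process, without headers
# awk picks the parent PID of entries whose state contains "Z" (zombie)
ps -eo ppid=,stat= | awk '$2 ~ /Z/ { print $1 }' | sort -u | xargs -r kill
echo "Parents of defunct processes signalled"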

Conclusions

With the above examples, I have only scratched the surface of the vast automation world that you can make use of in your daily activities as a tester. All these examples can be further optimized, customized, and extended according to your requirements.

Some of the other areas where automation can be very useful are –

  • Installation/un-installation of apps
  • Inspection of app package ids
  • Environment setup activity
  • Creating/renaming files
  • Assigning file and folder permissions
  • Copying files to remote host servers
  • Setting/un-setting environment variables
  • Test data creation/import/export
  • Data or metrics aggregation for analysis
  • Any backup activity

and the list is endless…

The main goal of this article was not to go deep into scripting skills or commands but to demonstrate, with real-life examples, how small automation scripts can take care of simple repetitive tasks. A reusable automation script can save you and your team a significant amount of time on tasks that you might otherwise be doing manually and repeatedly.

After all, who would not rather spend time on valuable, innovative, critical, deep-thinking tasks than on boring, time-intensive activities?