Bits of Learning

Learning sometimes happens in big jumps, but mostly in tiny steps. I share my baby steps of learning here, mostly on topics around programming, programming languages, software engineering, and computing in general, and occasionally on other disciplines of engineering or science. I mostly learn through examples and doing, and this place is a logbook of my learning experiences. You may find several interesting things here: little snippets of (hopefully useful) code, a bit of backing theory, and a lot of gyan (musings) on how learning can be so much fun.

Friday, March 16, 2007

A Small Test Automation System

Here I describe a small test automation system that has come in handy for me. It is very crude and is meant for small-scale, individual-level software development. It suits programs that take their input from a file or from standard input and write their output to standard output, language translators in particular, though that still covers very broad ground. Moreover, the simple constraint of building your translator so that it is testable by this kind of system automatically encourages good programming practice. I can vouch that it has given me a modest rise in productivity, a significant increase in correctness (since testing and bug catching became easier, and hence were done more freely, exhaustively, and frequently), and a hell of a lot of fun!
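As a concrete stand-in for prog, any filter of this shape will do. The toy "translator" below (purely illustrative, not part of the system) just uppercases its input:

#!/bin/sh
# A stand-in for prog: reads the text to translate from standard input
# and writes the "translation" to standard output.
tr 'a-z' 'A-Z'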

We create a directory named test in the directory where the program executable (let's call it prog) is placed. (In the scripts below, the directory holding the test data is named after the application itself, so that one set of scripts can serve several programs.) In this directory we create the following directories:
* input : contains the input of each test case, one file per test (the scripts below use the file name extension .kc, the source extension of the translator under test)
* output : where the test harness dumps the output of running prog on each test case, into a separate file of the same name (with file name extension .out)
* expect : where the expected output of each test case is placed in a separate file of the same name (with file name extension .exp)
* description : where the description of each test case is placed in a separate file of the same name (with file name extension .desc)
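Setting this up is a one-time job. A minimal sketch, run from the directory holding the executable and assuming the application is named prog:

# Create the test directory and the per-application layout described above.
mkdir test
cd test
mkdir prog
mkdir prog/input prog/output prog/expect prog/description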

We work with the following scripts (written in your favourite scripting language; mine are in plain sh):
- createTest : asks for a test case name and creates the test case. It looks for an input file of the same name in the input directory, runs prog on it, and, after getting the user's consent about the correctness of the generated output, dumps that output into a file of the same name in the expect directory.
- testTestCase : takes a test case name, runs prog on the corresponding input file in the input directory, and dumps the output into the output directory. It then does a simple Unix diff between the expected output (the file of the same name in the expect directory) and the generated output (the file of the same name in the output directory), and plants the PASS or FAIL verdict into a file (with .log extension) in the current working directory.
- testTestSuite : takes the name of a test suite file, which contains the names of all the test cases in the suite. It runs like testTestCase on each test case, and plants a PASS or FAIL verdict for each into a log file named after the test suite. A typical session is sketched right after this list.
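To give a feel for the flow, a typical session might look like this (the application name prog and the suite file name all are assumed for illustration):

./createTest.sh prog 1        # record input and expected output for test 1
./testTestCase.sh prog 1      # re-run test 1 after a change to prog
./testTestSuite.sh prog all   # run every test case listed in the file 'all'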



createTest.sh
#!/bin/sh
# createTest.sh: create (or update) a test case for a given application.
# Usage: ./createTest.sh app-name test-case

if [ $# -ne 2 ]
then
    echo "Usage - $0 app-name test-case"
    exit 1
fi

inputdir="./${1}/input/"
outputdir="./${1}/output/"
expectdir="./${1}/expect/"
descriptiondir="./${1}/description/"

testcasename=$2

# Read the test input from the terminal into the input file; a line
# containing just 'eof' terminates the input.
readinput()
{
    echo "type the test input data (to end the input, type 'eof' in the line following the last input line):"
    while read f
    do
        if [ "$f" = "eof" ]
        then
            echo "Input data done"
            break
        fi
        echo "$f" >> ${inputdir}${testcasename}.kc
    done
}

# Create or refresh the input file first, since the description is
# extracted from it.
if [ -f ${inputdir}${testcasename}.kc ]
then
    echo "Current Input: `cat ${inputdir}${testcasename}.kc`"
    echo "Do you want to change the input? (y / n)"
    read isNewInput
    if [ "$isNewInput" = "y" ]
    then
        rm ${inputdir}${testcasename}.kc
        readinput
    fi
else
    readinput
fi

# The description of a test case is simply the '//' comment lines of its
# input file.
if [ -f ${descriptiondir}${testcasename}.desc ]
then
    echo "Current Description: `cat ${descriptiondir}${testcasename}.desc`"
    echo "Do you want to change the description? (y / n)"
    read isNewDesc
    if [ "$isNewDesc" = "y" ]
    then
        rm ${descriptiondir}${testcasename}.desc
        grep "//" ${inputdir}${testcasename}.kc >> ${descriptiondir}${testcasename}.desc
    fi
else
    grep "//" ${inputdir}${testcasename}.kc >> ${descriptiondir}${testcasename}.desc
fi

# Run prog on the input and record its output as the expected output; the
# user then inspects it via viewtest.sh and vouches for its correctness.
cat ${inputdir}${testcasename}.kc | ../${1} > ${expectdir}${testcasename}.exp

./viewtest.sh $1 $testcasename

testTestCase.sh
#!/bin/sh
# testTestCase.sh: run one test case and compare its output against the
# expected output; the diff goes into a .log file in the current directory.
# Usage: ./testTestCase.sh app-name test-case

if [ $# -ne 2 ]
then
    echo "Usage - $0 app-name test-case"
    exit 1
fi

inputdir="./${1}/input/"
outputdir="./${1}/output/"
expectdir="./${1}/expect/"

CurrentIn="${inputdir}${2}.kc"
CurrentOut="${outputdir}${2}.out"
CurrentExpected="${expectdir}${2}.exp"
echo $CurrentIn
echo $CurrentOut

# Run the program on the test input.
cat $CurrentIn | ../${1} > $CurrentOut

echo "Test result for application ${1} test-case ${2}"
echo "Comparing $CurrentExpected and $CurrentOut"
diff $CurrentExpected $CurrentOut > ${1}.${2}.log
if [ "$?" != "0" ]
then
    echo "Test case $2: FAILED!"
else
    echo "Test case $2: PASSED!"
fi
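While hunting a bug, I keep the single failing test in a tight loop. One such round trip, assuming the program is rebuilt with make in the parent directory (both assumptions for illustration):

# Rebuild prog and re-run test 1; the diff, if any, lands in prog.1.log.
(cd .. && make) && ./testTestCase.sh prog 1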


testTestSuite.sh

#!/bin/sh
# testTestSuite.sh: run every test case listed in a test-suite file and
# record a PASS/FAIL verdict per test case in <app-name>/<test-suite>.log.
# Usage: ./testTestSuite.sh app-name test-suite

if [ $# -ne 2 ]
then
    echo "Usage - $0 app-name test-suite"
    exit 1
fi

inputdir="./${1}/input/"
outputdir="./${1}/output/"
expectdir="./${1}/expect/"

echo "testing application ${1} on test-suite ${2}"
# First pass: run the program on every test case named in the suite file.
while read f
do
    CurrentIn="${inputdir}${f}.kc"
    CurrentOut="${outputdir}${f}.out"
    echo $CurrentIn
    echo $CurrentOut
    cat $CurrentIn | ../${1} > $CurrentOut
done < $2

# Start with a fresh log file.
if [ -f ${1}/${2}.log ]
then
    rm ${1}/${2}.log
fi

echo "Test result for test-suite ${2}"
# Second pass: compare each output with the expected output and log a verdict.
while read f
do
    CurrentExpected="${expectdir}${f}.exp"
    CurrentOut="${outputdir}${f}.out"
    echo "Comparing $CurrentExpected and $CurrentOut"
    diff $CurrentExpected $CurrentOut > /dev/null
    if [ "$?" != "0" ]
    then
        echo "Test case $f: FAILED!" >> ${1}/${2}.log
    else
        echo "Test case $f: PASSED!" >> ${1}/${2}.log
    fi
done < $2
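Since a test suite is just a file with one test case name per line, it can be generated rather than maintained by hand. A sketch, assuming the layout above and an application named prog:

# List every existing test case of prog into a suite file named 'all',
# then run the whole suite.
ls prog/input | sed 's/\.kc$//' > all
./testTestSuite.sh prog all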



A test suite (one test case name per line):
1
2
3
4
5
6
7
8
9
10
11
12
19
20
21
22
23
24
25
26

A test verdict:
Test case 1: PASSED!
Test case 2: PASSED!
Test case 3: PASSED!
Test case 4: PASSED!
Test case 5: PASSED!
Test case 6: PASSED!
Test case 7: PASSED!
Test case 8: PASSED!
Test case 9: PASSED!
Test case 10: PASSED!
Test case 11: PASSED!
Test case 12: PASSED!
Test case 19: PASSED!
Test case 20: PASSED!
Test case 21: FAILED!
Test case 22: FAILED!
Test case 23: FAILED!
Test case 24: FAILED!
Test case 25: FAILED!
Test case 26: PASSED!

A download page for this tool (with more up-to-date source code and instructions for use)
