There are two types of design decisions we always seem to make while building large programs:
Architectural
Algorithmic
Architectural design decisions relate to modularisation, division of responsibility, information accessibility and the like. In a typical object-oriented implementation, they translate into decisions about class design, namespaces, the fields and methods of a class, the inheritance hierarchy, etc.
Algorithmic design is the choice of data-structure and control flow.
I feel that documenting architectural design decisions is best done offline, away from the code, in a separate place. On the other hand, documentation of algorithmic decisions is best placed close to the implementation, as code comments.
Just an observation. Nothing biblical about it.
Bits of Learning
Learning sometimes happens in big jumps, but mostly in little tiny steps. I share my baby steps of learning here, mostly on topics around programming, programming languages, software engineering, and computing in general. But occasionally, even on other disciplines of engineering or even science. I mostly learn through examples and doing. And this place is a logbook of my experiences in learning something. You may find several things interesting here: little cute snippets of (hopefully useful) code, a bit of backing theory, and a lot of gyan on how learning can be so much fun.
Monday, August 21, 2006
MS Ramaiah Polytechnic Talk
Last Saturday I had the good experience of giving a lecture on 'Advances in Software Testing' at MS Ramaiah Polytechnic.
The invitation came from Mr. T. Shankar, a scientific officer in our department who was also one of the organisers of the Training the Trainers workshop in February this year. I had given a talk in that workshop which was received quite well.
While I was in the middle of preparing the lecture, I heard from several sources that I could expect a fairly frigid audience. That was quite demoralising, as I was hoping to present something really different from the usual stuff. It was quite an ambitious plan, and I was quite excited in the beginning. However, thinking about the likely response to all that effort, I got quite psyched out. I abandoned the slide preparation midway and decided to make the whole thing impromptu. By my expectation, the lecture would run around 35-40 minutes.
However, on reaching there, my experience turned out to be quite the contrary. I was warmly welcomed by the principal of the college. On entering the lecture hall I found a bunch of bright young faces waiting for me. Quite gratifyingly, I could strike up a comfortable and friendly interactive conversation with the students. The lecture, which I thought wouldn't survive beyond 30-40 minutes, flowed smoothly for nearly two hours! It was a terrific experience to evoke so much interaction from students who had been alleged to be uninterested. I felt as if I had won a battle. Added to this pure joy, my ego was also gratified, as I was given very generous thanks and mementos -- a clock, a bouquet, and a shawl. I felt really honoured.
Here are the slides.
Monday, August 07, 2006
Orientation talk
As every year, the orientation programme for newcomers to the Computer Science and Automation department at IISc is under way. It follows the usual format of introducing the new students to various aspects of the department: coursework, research, labs, computing environments, administrative procedures, etc.
I presented a talk on Software Engineering in CSA this Thursday.
Here's the link to the slides.
It was late in the evening, just before dinner. The audience looked tired; the majority of them were the organisers, with very few new students. I had worked pretty hard on those slides, but I got very uninterested looks from the audience, and I hurried through the talk in a matching mood!
Nevertheless, I always look forward to talking to new students.
JRE plugin for Firefox
To run applets, we need the Java Runtime plugin for the browser. The Java Runtime Environment (JRE) plugin for Mozilla comes along with the JRE installation. For example, if the JRE is installed in the following location:
/usr/java/j2re1.4.2_10/
Then go to its plugin directory. You may find directories such as:
ns4/ ns610/ ns610-gcc32/
All of them contain a file named libjavaplugin.so.
One of these is the shared object plugin that you want.
Now go to the .mozilla/plugins/ directory in your home directory and create a soft link to the above file there:
ln -s /usr/java/j2re1.4.2_10/plugin/i386/ns4/libjavaplugin.so .
If this happens to be the right plugin, Firefox will now be able to run applets without trouble. If it's not, Firefox will not run. If that happens, just remove the soft link with unlink libjavaplugin.so in the .mozilla/plugins directory, and try the libjavaplugin.so files from the other directories in /usr/java/j2re1.4.2_10/plugin/i386/. One of them should work.
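The trial-and-error above is easy to script. Here is a minimal shell sketch; the function name and the demonstration directories are made up, and the sketch only links the first libjavaplugin.so it finds -- you would still verify by hand that Firefox starts:

```shell
# Link the first libjavaplugin.so found under a JRE plugin tree into
# the Mozilla/Firefox plugins directory. Paths are passed in explicitly.
install_java_plugin() {
    jre_plugin_base="$1"   # e.g. /usr/java/j2re1.4.2_10/plugin/i386
    moz_plugins="$2"       # e.g. $HOME/.mozilla/plugins
    mkdir -p "$moz_plugins"
    for d in "$jre_plugin_base"/*/; do
        if [ -f "${d}libjavaplugin.so" ]; then
            ln -sf "${d}libjavaplugin.so" "$moz_plugins/libjavaplugin.so"
            return 0
        fi
    done
    return 1
}

# Demonstration against a throwaway directory tree (ns610 holds the .so):
base=$(mktemp -d)
mkdir -p "$base/jre/ns4" "$base/jre/ns610"
touch "$base/jre/ns610/libjavaplugin.so"
install_java_plugin "$base/jre" "$base/plugins"
```

In real use you would call it with the /usr/java/... path shown above, and unlink the result if Firefox refuses to start.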
Thursday, June 01, 2006
scp not working
Other machines were not able to connect to my laptop, either through scp or ssh.
The culprit was a service called nifd, which 'is a daemon which runs on Howl clients to monitor the state of a network interface. nifd must be running on systems that use autoipd and mDNSResponder to automatically obtain a Link-Local IPv4 address and do Zeroconf service discovery. nifd should not be running otherwise.'
Whatever all that means, the last sentence was vital: that service needed to be stopped. I stopped it, and scp started working.
Sunday, May 28, 2006
CVS Repair
Today, when I tried to commit the testing directory to CVS, a strange problem came up. All commands, including cvs commit, failed when cvs tried to descend into the directory demo/input/api1: it reported that no such directory existed and aborted.
I realised that the directory testing/demo/data/input/ had initially been created as testing/demo/input and cvs added. Later, I decided to divide my data into input, output and intermediate, so I created testing/demo/data and moved the whole input directory into it. Since a directory named testing/demo/input was already there in the CVS repository, this move created the problem. I should perhaps have followed a sequence like the following to create the whole thing:
mkdir testing/demo/data
cp -r testing/demo/input testing/demo/data
rm -f testing/demo/input/*
cvs remove testing/demo/input/*
cvs remove testing/demo/input
rm -rf testing/demo/data/input/api1/CVS
rm -rf testing/demo/data/input/CVS
cd testing/demo/data/input/api1
cvs add *
cd ..
cvs add api1
cd ..
cvs add input
OK! That's quite a long process. I don't think it would scale to a more complicated reshuffle of directories within the working directory. I am sure there's a better way to do it. Nevertheless...
So, once the mistake was made (that is, the directory was shifted without following the above sequence), I persistently got the above errors. I finally managed to get rid of them by the following process:
I went to the CVSROOT and saw that there was a directory testing/demo/input.
I moved it to testing/demo/data/
Then I came back to the working directory and went into testing/demo/data/input/CVS
I opened the Repository file. The path given in it was testing/demo/input; I changed it to testing/demo/data/input. I then went to all the directories contained in testing/demo/data/ and fixed this erroneous pointer to the repository in each.
I also edited the Entries file of the testing/demo/CVS directory to remove the 'input' entry from it. This fixed the problem of cvs trying to look for this directory in testing/demo. I had to remove some spurious entries from the Entries files of one or two CVS directories within this path.
This fixed the problems. My CVS commands are now working fine.
Related Blog:
Working with CVS
Wednesday, March 29, 2006
Good Interfaces Are Good for Doing Good Experiments
Perhaps it's a truth that I spend so much effort in giving a professional structure and interface to the prototypes I make simply because I love doing it that way. But, I have seen that it does yield some practical benefits too.
Over the last couple of days, I have spent a good deal of time introducing elements of minimal usability: command-line options, and complete end-to-end execution with a single command.
Now when I am actually collecting the experimental data, I am able to do it almost completely automatically using a simple perl script to invoke the tool with the right command line inputs.
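The original driver was a Perl script; as an illustration, a shell rendering of the same idea might look like the following. The function name, flag-free invocation and the stub tool are all hypothetical stand-ins for the actual tool and its options:

```shell
# Run a tool once per input file, capturing each run's output per file.
run_experiments() {
    tool="$1"; indir="$2"; outdir="$3"
    mkdir -p "$outdir"
    for input in "$indir"/*; do
        name=$(basename "$input")
        "$tool" "$input" > "$outdir/$name.out"
    done
}

# Demonstration with a stub "tool" that just counts lines of its input:
work=$(mktemp -d)
mkdir -p "$work/in"
printf 'a\nb\n' > "$work/in/case1"
printf 'x\n'    > "$work/in/case2"
cat > "$work/tool" <<'EOF'
#!/bin/sh
wc -l < "$1"
EOF
chmod +x "$work/tool"
run_experiments "$work/tool" "$work/in" "$work/out"
```

The point is the shape of the loop: with a single end-to-end command per run, data collection reduces to iterating over inputs.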
Of course, designing the automation of an experiment is fairly complex and requires insight into the requirements. What are the figures we are trying to measure? This question can have a significant effect on the manner in which the system under test is designed.
This blog will incorporate some of my findings at a high level from designing the experiments for my method.
In short, I was to draw a comparison between a specification-based regression testing method which I call the 'explicit state space enumeration' or ESSE method, and another method called the 'Legal Random Calls' or LRC method.
Sunday, March 26, 2006
Separate Parsers in The Same Application
It might often happen that you would like to have two parsers coexist in the same program. Here's an example of this situation in my current implementation of Modest -- the Model Based Testing tool.
There're the following two modules:
cg : This reads API specifications (written in a language, say, A) from a spec file. It then generates the GraphMaker code. This, when built and executed, will generate the state space graph (written in a language, say, B) of the given application.
pf : This reads the state space graph, again written in B, reads test specifications, written in a language, say, C, and computes the test sequences.
We observe that there are three languages to be recognised -- the API specification language A, the graph description language B, and the test specification language C. We need parsers for all three. Incidentally, in our case, B = C (in the context-free grammatical sense). However, the data structures into which they are read are different, so different parsers are required anyway. But the lexical analyser for B and C is the same.
Say the lexical analyser for A, B and C are l(A), l(B) and l(C), and let the syntax analysers be p(A), p(B), and p(C) respectively. I used yacc (in fact bison) to write the specs for p(A). I hand-coded p(B) and p(C).
l(A) and l(B) were written in lex (in fact flex). And, as mentioned above, l(B) = l(C).
Initially, cg and pf were developed separately, so their parsers and lexical analysers didn't interfere with each other. However, when I tried integrating them into Modest, I ran into trouble due to the following:
1. Name Conflicts among globals
--------------------------------------------
When I did flex(Vocab(A)), it generated the lexical analyser function yylex(), which is global. Similarly, flex(Vocab(B)) too generated a lexical analyser function yylex(). Both are global, and hence gave a redefinition error while linking.
Solution:
As mentioned above, the default name of the lexical analyser function generated by flex is yylex(). Similarly, the default name of the syntax analyser function generated by bison is yyparse(). Both these names can, however, be changed with the following.
Running flex as follows:
flex -Pprefixname inputfilename.flex
will generate lexical analyser function with the name prefixnamelex() instead of yylex().
Similarly running bison as follows:
bison --name-prefix prefixname inputfilename.yy
will generate the syntax analyser function with the name prefixnameparse() instead of yyparse(). Corresponding changes happen to many important symbols in the generated parser. For instance, the calls to yylex() in the generated code will now all be to prefixnamelex(). Hence, it is necessary to use the same prefixname for both the flex and bison commands, so that the linker finds the prefixnamelex() function that prefixnameparse() calls.
This solves the name conflict problem for the lexical analyser and syntax analyser functions when more than one analyser lives in the same program. Name conflicts arising between other globals that you might have created can easily be resolved by encapsulating them in namespaces of those modules (I am assuming C++).
2. Name conflict between the input source file-pointer
--------------------------------------------------------------------------
The way to direct flex to generate a lexical analyser that reads from a file pointer of a particular name, say fin, is to embed the following preprocessor directives in the flex input file:
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
    if ( (result = fread( (char*)buf, sizeof(char), max_size, fin)) < 0 ) \
        YY_FATAL_ERROR( "read() in flex scanner failed");
For example, for cg, the above was
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
    if ( (result = fread( (char*)buf, sizeof(char), max_size, cgfin)) < 0 ) \
        YY_FATAL_ERROR( "read() in flex scanner failed");
And for pf, it was
#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
    if ( (result = fread( (char*)buf, sizeof(char), max_size, pffin)) < 0 ) \
        YY_FATAL_ERROR( "read() in flex scanner failed");
Of course, it is our responsibility to ensure that this FILE * is open by the time the lexical analyser tries to read from it.
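For context, here is a sketch of where such a redefinition sits in a flex input file. The prologue code and the token rule below are illustrative, not the actual Modest sources; with -Pcg, the generated scanner's input pointer is named cgfin, and the surrounding code must fopen() it before calling cgparse():

```lex
%{
/* C prologue of the cg lexer spec (illustrative skeleton).      */
#include <stdio.h>
extern FILE *cgfin;   /* opened by the caller before parsing     */

#undef YY_INPUT
#define YY_INPUT(buf,result,max_size) \
    if ( (result = fread( (char*)buf, sizeof(char), max_size, cgfin)) < 0 ) \
        YY_FATAL_ERROR( "read() in flex scanner failed");
%}

%%
[a-zA-Z_][a-zA-Z0-9_]*   { /* return an identifier token ... */ }
%%
```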
Friday, March 17, 2006
Working with CVS
I am facing versioning issues in the proper sense for the first time. Or maybe this is the first time I am trying to solve these problems in this fashion.
The testing directory in my CVS repository contains two tags: after-philips (a branch) and with-arguments (the main trunk). The after-philips tag is what I had at the end of last year's Philips Research internship. with-arguments contains the additions I made afterwards, mainly the capability of cg to generate code from API specs with functions taking arguments.
Now the problem starts. In my funcoding directory, I have a testing directory. This contains the development I was doing to incorporate the pointers feature into the API language; this code was checked out from the with-arguments revision tag. Basically, the objective of this development is to handle functions accepting pointer arguments. Multiple values are returned through these arguments, and they figure in the preconditions and postconditions of the API functions. That work ran into sticky implementation issues, and I digressed from it to do this paper-writing work.
Now I am supposed to produce some results which will hopefully be incorporated in the paper. So I can check out yet another copy from the with-arguments revision tag. However, I anticipate that this will involve a series of checkins which I don't want to interfere with my with-pointers branch (no such tag actually exists in the repository; I am referring to it just for explanation's sake).
Solution: I created a branch icsm06-demo and checked out a local copy from it. So my tinkering with this branch will keep my with-pointers work unaffected.
Currently, I am not able to foresee if some of this work will need to be merged with the with-pointers branch. We will see later!
Writing algorithms in latex
There're a number of ways in which algorithms can be written in a LaTeX article.
Option 1
One way is to use the \verb command. That gives a typewritten look to the algorithm. It may be OK for small code snippets, but it is very inflexible. On the whole, this method is not advisable.
Option 2
Use the listings package. It can be downloaded from here, but it usually comes prepackaged in LaTeX installations, so it may already be on your machine. listings is very versatile, giving you the facility to include code snippets in many languages (C, C++, Pascal, pseudocode, HTML...). You can include source code directly from an external file, inline it, or add it right as part of your regular text.
However, listings appears appropriate only for inserting code snippets, not algorithms. I don't think there's any inherent limitation, since it seems to be a very stable package, but option 3 seems more appropriate for algorithms.
Option 3
To write proper algorithms, one should use dedicated algorithm packages. My colleagues seem to prefer a combination of algorithm and algorithmic. Both come bundled in the same package, which can be downloaded from here.
A usual way is to nest algorithmic inside algorithm. The latex code will look somewhat like:
\begin{algorithm}
\begin{algorithmic}
...
your algorithm
...
\end{algorithmic}
\end{algorithm}
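As an illustration, a small made-up algorithm filled into this skeleton using algorithmic's commands might look like:

```latex
\begin{algorithm}
\caption{Linear search (illustrative)}
\begin{algorithmic}[1]  % the optional [1] numbers every line
\STATE $found \leftarrow$ \FALSE
\FOR{$i = 1$ \TO $n$}
  \IF{$A[i] = x$}
    \STATE $found \leftarrow$ \TRUE
  \ENDIF
\ENDFOR
\RETURN $found$
\end{algorithmic}
\end{algorithm}
```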
However, from what I observed, this seems to have a drawback: algorithmic doesn't seem to have a way of putting more than one procedure in a single algorithm, nor of invoking other procedures.
That problem is mitigated if we replace algorithmic with algo, which has function calls and multiple procedures. I like the algorithm + algo combination best.
Please note that when you are using algo, you must exclude algorithmic. Using both packages together, as in:
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{algo}
seems to cause some problems. Notably, if you are using algo to write your algorithm, the indentation disappears when the algorithmic package is also loaded. Just comment out that line:
%\usepackage{algorithmic}
However, using algo.sty has a severe problem. There seem to be many versions of it available on the web which, unfortunately, appear to have originated from completely different sources and are therefore incompatible with each other. In fact, I have lost track of the source and accompanying documentation of the version I am currently using. There's one version available here; I am planning to shift to that from next time.
Option 1
One way is to use the \verb command. That would give a type written look to the algorithm. May be OK to use that for small code snippets. But it is very inflexible. In total, it's not advised to use this method.
Option 2
Use listings package. It can be got from here. Please check. It usually comes prepackaged in latex installations. So, it may already be there on your machine. listings is very versatile, giving you the facility to include code snippets of many languages (C, C++, Pascal, pseudocode, HTML...). You can include source-code directly from an external file. You can also inline that. And you could add it right as a part of your regular text.
However, listings appears more appropriate only for inserting code snippets, and not algorithms. Well, I don't think there's any inherent limitation, since it seems to be a very stable package. But, option 3 seems more appropriate for algorithms.
Option 3
To write proper algorithms, one should use one or more of the above. My colleagues seem to prefer a combination of algorithm and algorithmic. Both come bundled in the same package that can be downloaded from here.
A usual way is to nest algorithmic inside algorithm. The latex code will look somewhat like:
\begin{algorithm}
\begin{algorithmic}
...
your algorithm
...
\end
However, this seems to have a drawback from what I observed. algorithmic doesn't seem to have a way of having more than one procedure in a single algorithm. And also of invoking other procedures.
That problem gets mitigated if we replace algorithmic with algo. It has got function calls, and multiple procedures. I prefer algorithm, algo combination the best.
Please note that when you are using algo, you must exclude algorithmic. Using both packages together, as in:
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{algo}
seems to have some problems. Notably, if you are using algo to write your algorithm, the indentations will disappear if the algorithmic package is used. Just comment out that above line:
%\usepackage{algorithmic}
However, using algo.sty has a severe problem: there seem to be many versions of it available on the web which unfortunately originate from completely different sources, and are therefore incompatible with each other. In fact, I have lost track of the source and accompanying documentation of the version I am currently using. There's one version available here; I am planning to shift to that from next time on.
Saturday, February 25, 2006
Installing beamer
The beamer class for making stylish LaTeX presentations is available here. Once the tar.gz file is downloaded and untarred, the doc/ directory contains beameruserguide.pdf, which describes the installation process in detail. The salient points are reproduced here:
Switch user to root.
In the texmf directory (in my case it was /usr/share/texmf/; let's call it $(texmfdir)), find the directory tex/.
In tex/, find the directory latex/. If it's not there, create it.
cd latex/
mkdir beamer/
mkdir xcolor/
mkdir pgf/
Copy all files in the beamer directory (created on untarring the tarball) to:
$(texmfdir)/tex/latex/beamer.
Find pgf package on your machine. If it is not there, download from here.
Untar:
tar -xvzf pgf-1.00.tar.gz
Switch user to root.
Copy all files in the pgf-1.00 to $(texmfdir)/tex/latex/pgf/
Similarly download the xcolor package from here.
Unzip:
unzip xcolor.zip
cd xcolor/
As per the installation instructions found in README, do the following:
latex xcolor.ins
Switch user to root.
cp *.sty $(texmfdir)/tex/latex/xcolor/
cp *.def $(texmfdir)/tex/latex/xcolor/
mkdir $(texmfdir)/dvips/xcolor
cp *.pro $(texmfdir)/dvips/xcolor
Finally update the tex database by:
texhash
That's it. You should be ready to go!
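The steps above can be collected into one script. The texmf path and archive names are the post's own assumptions; to keep the sketch safe to run, it builds the directory layout in a scratch tree instead of the real texmf root, and the actual copy commands are left as comments.

```shell
set -e
# Stand-in for the real texmf root (e.g. /usr/share/texmf); a scratch
# directory is used so this sketch can run without root.
TEXMFDIR="$(mktemp -d)/texmf"

mkdir -p "$TEXMFDIR/tex/latex/beamer" \
         "$TEXMFDIR/tex/latex/pgf" \
         "$TEXMFDIR/tex/latex/xcolor" \
         "$TEXMFDIR/dvips/xcolor"

# With the archives untarred in the current directory, the real copies
# would be (run as root against the real texmf tree):
# cp beamer/*   "$TEXMFDIR/tex/latex/beamer/"
# cp pgf-1.00/* "$TEXMFDIR/tex/latex/pgf/"
# cp xcolor/*.sty xcolor/*.def "$TEXMFDIR/tex/latex/xcolor/"
# cp xcolor/*.pro "$TEXMFDIR/dvips/xcolor/"
# texhash

ls "$TEXMFDIR/tex/latex"
```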
Thursday, February 23, 2006
Bustling and Overcrowded; Solitary and Lonely
Last night, Kapil and I had a long discussion on what the research arena is like for researchers in Software Engineering (my field) and those in Computer Architecture (Kapil's field). These words are excerpts and afterthoughts.
Computer Architecture has matured beyond measure. The outputs of research in this field have found awesome success. Computers are good. Yes, there's this thing about unending demands, so there's always a reason to want a better computer than the one we have. But frankly, this research field has delivered. It essentially consists of a practical kind of research, which does take the help of complicated math, but in controlled measures. Its takers are big chip-manufacturing firms investing billions in innovation. Every individual consumer is eager for new ideas. Ideas come in large numbers, and a good number gets quickly consumed. Therefore, there has been very feverish research over the past couple of decades. Many people have crowded in. It looks as though all that could be thought out has been thought out. Not that nothing more is left to think; but whatever there is has become almost obvious to everyone due to the maturity of the field. Hence, there are many researchers ready to pounce on a problem the moment it appears. If you are a researcher in Computer Architecture and you notice a problem, you can be sure that there would be ten others all over the world who have noticed the same problem and have already started working hard on it. To make things worse, many of them might have greater resources than you, in terms of manpower, experience and perhaps even sharpness. So, the bottom line: if you have got an idea, you had better be quick to take it to the finish. For if you aren't quick, somebody else surely will be, and just when you are about to see light at the end of the tunnel, you will be hit with a bolt from the blue -- a paper coming out from some unknown competitor of yours.
Software Engineering as a research field is very different. Quite unlike the beliefs of a lay person, research problems in this area are many, and working solutions very few. The state of the practice uses archaic methods, which further curdles the already messy problem space. Software is built and maintained at such breakneck speed that there's no good way of making online studies. Moreover, the problems in Software Engineering are mostly related to a number of '-ilities', as they call them: maintainability, portability, testability, etc. These are, as of now, immeasurable quality parameters. In the absence of proper metrics, what can researchers in this area hope to improve? The field is not as mature as Computer Architecture; consequently the research efforts are pretty scattered. One advantage of this is that an idea occurring to you has a significant chance of not having occurred to anyone else. Disadvantage: you don't have any benchmarks to test the goodness of an idea. If you sound too concrete, you could be accused of proposing something trivial. If you are too abstract, you could be accused of proposing something too wild and impractical. Worse: you may be charged with 'handwaving!' Problems in this field are many and crying for good solutions. The solutions, however, are nowhere close to really alleviating the pain that software making, as a practice, is.
Kapil asked the question: 'What evidence is there in history of a time when there was a dire need for a paradigm shift in thinking, and then an invention came and solved the problem?'
We thought, and within our limited knowledge couldn't come up with any such example. There are plenty of examples where there were good ideas which were displayed just like that; they caught the attention of users, and they flourished. But no example could be recalled of a hitherto non-existing technology coming to the rescue of mankind from a pressing crisis. On thinking hard, I feel there do exist solutions which involve the clever adaptation of existing technology to address a crisis. Additionally, I think some examples from World War 2 could be found where a pure technological solution came in direct response to a military requirement, and it changed the history of the world. But there has been no such technology that emerged in response to a crisis common to all. Well, that's the way things are. Can't complain!
If software engineering research comes up with some breakthrough results now, it will be an invention of that type -- one in direct response to a crisis -- something that doesn't seem to have happened in the visible history of science. It wouldn't perhaps be silly to assume that there's not going to be any such breakthrough very soon after all. Such breakthroughs seem to occur in two extreme conditions: when there's perfect peace, and when there's war. The current scenario is neither of these. Of course, the world doesn't seem to be growing any more peaceful with each passing day. Perhaps we'll soon have a war-like condition, and then we will come out with the real solutions.
Friday, February 17, 2006
latex text in xfig images
This is required for having professional LaTeX-style lettering in xfig figures.
The solution was found in this website.
I have downloaded fig2epsi and stored it in ~/mybin. All I have to do is this: after inserting the LaTeX text in the xfig figure, I save it, then run fig2epsi on it.
The resulting image then works well in the latex file.
That's all. Simple!
Thursday, February 16, 2006
Getting The Hands Dirty
(excerpt from my talk given in TTT)
I am a student of software engineering. In one way, I am speaking to you also as a representative of a large community of students of this subject. Software Engineering is a subject of a practical nature. It can't be learned by studying theories and methods without appreciating the scale, or at least the nature, of the problems of software engineering. One must get one's hands dirty encountering the practical problems and, if possible, solve them in one's own right. On the other hand, it's an oversimplification to say that taking a plunge into real-life, industrial-scale problems right after graduation can take the place of academic understanding. At best, it often creates cynical software engineers who have given in to the maxim that software development is inherently a misery. They talk theory only to convince auditors that they deserve CMM Level 5 certification.
The real lessons of software engineering are to be obtained in an academic environment which gives ample exposure to both the theoretical and practical aspects of this difficult subject. Whether this environment is created in the universities or in industry training rooms is beside the point. The key lies in the orientation and content of the course, and in the attitude of the instructor and the student alike.
Though my association with software engineering as a practice is now many years old, my introduction to it as an academic subject is new. In fact, I started studying software testing formally quite recently. Immediately after I got introduced to the very basic ideas of testing, the first thing I started craving was to convince myself that elaborate testing is indeed required. I knew that industrial projects are complex and in dire need of automation, not just in testing but in all stages of the SDLC. I had seen for myself that the amount of automation achieved in testing in most projects is dismal. I needed to see the utility at a scale where I could comprehend the need for testing, and try out the automated methods that could be fruitfully applied at that level, without getting overawed by the scale at which these should actually be employed.
I started on my own to design an automatic testing system for a small software system I had built. The SUT was called mobilewp, a small emulation of the T9 dictionary in mobile phones. It displayed on the console the list of prefixes of candidate words that could be formed with a given sequence of keypad inputs. The system was 2000 lines of C code. It took me nearly a man-week to finish the implementation, including the design of the test automation system.
The test harness was a very simple one. I wrote a bunch of test cases, about 100 of them, manually. I wrote a small shell script that invoked the mobilewp program with an input I provided, displayed the output the system produced in response, and saved the result in a designated file in a designated location as the expected output for that input. Of course, the simplifying assumptions here are that the system is correctly implemented at the time of creation, and that it's possible to find out whether that's indeed the case by manually inspecting the output for a given test input. The second assumption was indeed true. The first one was also practically true, with a bit of care taken in inspecting the outputs during test-case generation. The criterion I used for test-case generation was `intuition'!
Then I created another shell script that played the role of the actual test harness. Given a list of test cases, it would pick them one by one from the prespecified location where the sample inputs were stored, feed them to the mobilewp program, and dump the outputs into a predesignated output directory. All that done, it would finally diff the expected outputs against the corresponding actual outputs. If no difference was found, the test case passed; else, it was verdicted as failed.
A pretty trivial system it was. The test data was generated, as I mentioned, by mere intuition. The initial check on the expected output, too, was done manually. The test verdict was passed by a mere diff. However, there were some very good things about it. It took me less than a day's effort to write all the test cases. Though their generation was informal, it happened alongside system development. This gave them an intuitive penetration that's possible only while the system is being developed, when the developer has the best idea of what's expected of the system and what's right. The automatic execution of all those test cases takes just about a minute. At that point the test suite thus created was quite complete. During a further few days of development, the process of incorporating new features was considerably less painstaking than it would otherwise have been. It took just a minute to run all the test cases automatically after incorporating every new feature. Almost invariably, at the first run a test case would fail, and the set of failing test cases would easily give an insight into what had gone wrong. It was quite easy! It worked! And it took just that bit of getting my hands dirty to get included in the league of supporters of software-testing automation. It's not enough to show that it's required to do good testing; it's equally important to show that it pays to do so.
As a parting note, I would just like to point out that the above exercise qualifies as black-box testing. The test data generation was manual; the test execution and evaluation of results were automatic. The specification was not formal. It wasn't even informal; in fact, it was implicit, residing only in my mind.
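The harness described above can be sketched in a few lines of shell. Everything below is made up for illustration: the real mobilewp is not reproduced, so a trivial stand-in SUT (one that reverses its input line) takes its place, and the directory names are arbitrary.

```shell
set -e
WORK="$(mktemp -d)"
mkdir -p "$WORK/inputs" "$WORK/expected" "$WORK/actual"

# Stand-in for mobilewp: this "SUT" just reverses its input line.
sut() { rev; }

# Two hand-written test cases with expected outputs (recorded earlier
# by inspecting the SUT's output, as in the post).
printf '23\n'  > "$WORK/inputs/t1";  printf '32\n'  > "$WORK/expected/t1"
printf '843\n' > "$WORK/inputs/t2";  printf '348\n' > "$WORK/expected/t2"

# The harness: run every test case, diff actual against expected.
pass=0; fail=0
for t in "$WORK"/inputs/*; do
  name="$(basename "$t")"
  sut < "$t" > "$WORK/actual/$name"
  if diff -q "$WORK/expected/$name" "$WORK/actual/$name" > /dev/null; then
    pass=$((pass+1))
  else
    fail=$((fail+1))
  fi
done
echo "passed=$pass failed=$fail"
```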
Related blogs:
TTT - Talking to Teachers
Workshop on Industry Oriented Software Engineering
Sunday, February 12, 2006
The Way to Take Backup!
I took a backup of my important data yesterday. I did it in a seemingly roundabout way. I have a machine called karma, and my laptop is named pramaana. There were two purposes to the complete activity:
- To take a backup of all the (important) data on my laptop onto karma.
- To create some space on karma by shifting unnecessary stuff off it.
Some of the backed-up data was initially on karma, but I did the roundabout thing of transferring it all to pramaana, then backing it up to karma again.
Well, that would appear a little roundabout. The idea was to create the landscape of data on pramaana first, and then decide what to back up. Now, every time I take a backup, the process is repeatable.
It didn't appear very straight to Karthik. To me, that was the way to go.
As I remarked, he's an optimisation guy, and anything non-optimal wouldn't appeal to him. I am a programming guy, and modularity is the thing for me.
We both are true to our trades! :)
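The repeatable part of such a backup can be sketched in shell. The file names below are illustrative, and karma is simulated by a local directory; with a real remote machine the final copy would go over scp or rsync instead.

```shell
set -e
STAGING="$(mktemp -d)"   # the "landscape of data" gathered on pramaana
BACKUP="$(mktemp -d)"    # stand-in for a backup directory on karma

echo "some important data" > "$STAGING/notes.txt"

# Archive the chosen landscape and ship the archive to the backup
# location; rerunning these two lines repeats the backup exactly.
tar -czf "$BACKUP/pramaana-backup.tar.gz" -C "$STAGING" .
tar -tzf "$BACKUP/pramaana-backup.tar.gz"
```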
Friday, February 10, 2006
Set Containing All Sets
Can there be a set containing all sets? No!
It seems this is a well-known result in set theory (it follows from Cantor's theorem), but frankly, it didn't seem intuitive in the beginning. I didn't know any such theorem existed. However, on giving it some serious thought, I hit upon several proofs, all of course based on a common line of thought.
The basic fact is: every set has a power set. Period.
OK! More directly, this means that given any set we can always construct a set strictly bigger than it, namely its power set. A set containing all sets would be the biggest possible set. But then its power set would be bigger still.
Hence proved!
Quite nice.
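The key step, that the power set is always strictly bigger, is Cantor's theorem; a compact diagonal argument in LaTeX:

```latex
For any set $S$ there is no surjection $f\colon S \to \mathcal{P}(S)$.
Given any $f$, consider the diagonal set
\[
  D = \{\, x \in S \mid x \notin f(x) \,\}.
\]
If $D = f(d)$ for some $d \in S$, then
\[
  d \in D \iff d \notin f(d) = D,
\]
a contradiction. So $|S| < |\mathcal{P}(S)|$, and no set can contain
all sets, since its own power set would have to be strictly bigger.
```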
Programming -- Some Gyan
Here comes the most important part of programming. Good programming is dependent on your taste, experience, etc., and in the end it all boils down to practice. That's why many in computer science think that programming is essentially an art. Well, lots of people try to make the whole process of programming very formal, so that it can be argued that programming is an engineering activity, not an art (as if being an art is some kind of bad thing!). But I think we shouldn't get into that. If it's an art, then let it be. We know well what's required to create good pieces of art.
One point to add to that: the beauty created by this art of programming is more pragmatic in nature than other creations of art. Accordingly, this artist, the programmer, learns to measure the beauty of his creation in terms of things like efficiency, changeability, portability and intuitive appeal. If and as you get more experienced at programming, you will develop a fairly good idea of what scores high on the above points. Really, I can't give a mathematical method of measuring that. Practise!
TTT - Talking to Teachers
Today, I gave my scheduled lecture at TTT, the workshop on Industry Oriented Software Engineering. It went off well. I couldn't connect my laptop with the projector there, as the connecting cord wouldn't go into the socket of my lappy. I think I could've got it right if I had some more time, but there wasn't any, as the laptops were being hurriedly changed between the talks. I had to use the laptop provided there. It was disappointing! :(
The teachers of engineering colleges are brighter and more enthusiastic than our impression of them. They do have a sincere wish to do better than what's already been done, especially for an orphaned subject like Software Engineering.
I got to meet and talk to many of them. They appeared to be quite well informed about many of the trends. I was also strongly aware that their knowledge of textbook material is quite thorough, which in my case is almost zero. Whatever I have picked up is through my experience and struggle. Though that may have a spark of originality in it, it also has blemishes of confusion and ambiguity.
Well, the experience was good.
Related blog:
workshop on Industry oriented software engineering.
Thursday, February 09, 2006
Talking with Young Students
I happened to meet two young students yesterday. They are MCA students from Dharwar. Meeting them was quite a strange experience.
They were here, like hordes of other students, to look for a topic for their final-semester project. I could see within minutes of meeting them that they were far from being in a position to do anything non-trivial. They were interested in doing some project in software engineering, but beyond the definitions of the SDLC, their knowledge of it was absent. In fact, they had come with the intention of doing a project on a programming language. This doesn't mean they intended to work on the theory of programming languages; their intention was solely to do some kind of development in a programming language, preferably Java, or perhaps C#.
I really couldn't bring myself to advise them to do a project with the main intention of learning a programming language. I could also sense that they weren't in a position to do any research-oriented project. So I asked them to look around in their own institute and see if there was anything in their office administration processes with scope for automation. It would give them the necessary skill and experience in programming, and, apart from that, they would get to build something that might actually be used by someone else, which is surely a proud and satisfying feeling.
They seemed to take the idea well. The overall experience was, in fact, quite gloomy. But I was suddenly flooded with a ray of hope when one of them asked: 'Just tell us, what's the way to think in the right manner?' I thought he had already started thinking in the right way, by asking the right question! Yes, I told him so, and gave him a short lecture on how to drive himself to a state of endless curiosity, asking the right questions at the right time, and seeking their answers aggressively.
In the end, I felt there was a smile on the faces of all three of us. Maybe for different reasons!
Training The Trainers -- Infosys Workshop on Industry Oriented Software Engineering
Today I attended the first day of Training The Trainers, a workshop organised by Infosys for teachers of engineering colleges all over Karnataka. The theme of the workshop is 'Industry Oriented Software Engineering'.
Some stimulating talks were given in the morning. Prof. Srikant spoke about the perpetual war between Industry and Academia over which approach to learning computer science is right: the theory-oriented, foundational way, as academia would have it, or the practice-oriented way, as industry would have it. Obviously, as always, such discussions end with a suggestion to take some kind of elusive middle path. In reality, that only means it's too early to end the war.
In spite of global competition, especially from China, Brazil, and many East European countries, India's prospects as a leading provider of software services to the world seem bright, going by some authoritative reports.
Infosys seems to be doing a fine job, and they seem to have the upper hand in what they call the Global Delivery Model (GDM).
Out of all the talks, only two are from academia. One happened today, by Dr. Deepak D'Souza, on verification. The other is tomorrow. That's by me! :) I will speak on 'Specification Based Software Testing.'
Tuesday, February 07, 2006
JAVA ho java, aur vapas mat aava (Java, off you go, and don't come back)
Vinod Kumar B G wrote:
> In an interview a candidate was asked the question "Why We dont Have pointers in JAVA?", to which he replied like this:
>
>
>
> " I married a widow who had a grown-up daughter. My father, who visited us quite often, fell in love with my step daughter and married
> her. Hence, my father became my son-in-law, and my step-daughter became my mother. Some months later, my wife gave birth to a son, who
> became the brother in law of my father as well as my uncle. "
>
> " The wife of my father, that is my step daughter, also had a son. Thereby, i got a brother and at the same time a grandson. My wife is
> my grandmother, since she is my mother's mother. Hence, i am my wife's husband and at the same time her step-grandson; in other words, i am
> my own grandfather. "
>
> " I guess that's why we don't have pointer in Java..."
>
>
Well, in Java you can still be your own grandson. Nothing stops you.
There's only one thing that happens in Java that doesn't happen in C or
C++: you are allowed to produce as many children as you want, and beyond
a point, when they are of no use to you, you may well throw them out of
your house. Some strange unknown being called the 'Garbage Collector'
takes them to some lonely place and buries them, and they will never
come back many years later claiming to be your offspring.
So that's indeed something great about Java: you create all the mess
you want to create, and forget about it when you are done. Someone
else takes care of it. That's Java!
Bye,
Sujit
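The joke in the mail above can be made literal. Here is a playful little sketch (the `Genealogy` and `Person` classes are invented for illustration, not from the original mail): Java references can form cycles freely, and once nothing live reaches a cycle, the garbage collector is free to reclaim the whole thing, with no free() or delete anywhere.

```java
// Invented illustration: in Java you really can be your own grandfather,
// and the garbage collector cleans up the resulting cycle for you.
public class Genealogy {
    static class Person {
        final String name;
        Person grandfather;              // a plain reference, not a raw pointer
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Person p = new Person("Sujit");
        p.grandfather = p;               // being your own grandfather compiles fine
        System.out.println(p.grandfather.name);

        p = null;                        // the cycle is now unreachable...
        System.gc();                     // ...and the collector may bury it when
                                         // it chooses; System.gc() is only a hint
    }
}
```

Note that even a reference-counting scheme would leak this self-referential `Person`; a tracing collector like Java's handles cycles without any help from the programmer.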
>> The definite strength of C (and probably the terrifying thing as well)
>> is its pointer functionality. To imagine that a language of this sort
>> existed even about 30 years back is indeed remarkable. And not much
>> about C has changed.
>>
>> The only problem with using pointers in C is that one has to remember
>> to clean them up once they have been used. Not doing so can turn out to
>> be a nightmare. Worse still if you have problems in code where you
>> have used pointers extensively and haven't got it working.
>>
>> But having said all this, it's ideal to use a language like C to handle
>> huge amounts of data, given its speed of operation; I cannot imagine
>> Java finishing the processing even in twice the time. When I was
>> interviewing with a company, I told them outright that I despised
>> programming in Java. I must have learned, prior to going in there, that
>> all their work was in Java. It's only obvious that I didn't get that job.
>>
>> Vinod, don't fret. They aren't very difficult. You just need to know the
>> right means of handling them. Don't give up!!
>>
>> A
Sujit wrote:
> Abhinandan, I would say you mustn't think of Java as bad just because it
> doesn't have pointers. If you remember your lessons in programming
> languages, even functional and logic programming languages (ML, Lisp...)
> have their automatic garbage collection. Java is an excellent language.
> It's clean, has a stricter type system, and a near-complete library.
> The fact that it runs slower definitely can't be held against it for long.
> It scores over C++ on most points, not just from the point of view of the
> programmer, but even of a programming-language designer.
>
> The 'only problem' that you have pointed out about C (or C++) is a very
> big problem indeed! :)
>
> But I also confess that I am still a C++ buff, though my reason doesn't
> have any rational basis. Apart from operator overloading and templates, I
> like C++ because it makes things more difficult for me by not cleaning up
> the garbage I create. I have been continuously trying to find patterns in
> coding that would keep me from forgetting to clean up my garbage
> allocations, or would save me from dereferencing a null pointer. I (like
> many others) have met with only partial success. And it can't be denied
> that my ways won't scale to the kind of environments in which software is
> written: millions of lines of code, written by unknown predecessors, to
> be fixed and delivered before the next Monday. I have painfully learned
> to accept that beyond a point, human brains can't handle all this. I
> accept that, come a difficult enough situation, I will quickly ditch my
> geeky ego and switch to something that makes my job manageable. Even if
> it's Java. :)
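One point in the mail above is worth a concrete sketch: Java does not eliminate null, but it does make the null-dereference mistake well defined. A minimal, invented example (the `NullDemo` class and `describe` method are mine, not from the thread): dereferencing null in Java raises a catchable NullPointerException, whereas the same mistake through a C or C++ pointer is undefined behavior.

```java
// Invented illustration: a null dereference in Java is a well-defined,
// recoverable exception rather than silent memory corruption.
public class NullDemo {
    static String describe(String s) {
        try {
            return "length = " + s.length();     // throws if s is null
        } catch (NullPointerException e) {
            return "caught a null dereference";  // the program continues safely
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("java"));    // length = 4
        System.out.println(describe(null));      // caught a null dereference
    }
}
```

This is the kind of safety net the "stricter" runtime buys: the bug still has to be fixed, but it announces itself with a stack trace instead of corrupting memory somewhere far from the fault.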