Wednesday, December 12, 2007

Generating executable simulations with OPNET (and why)

First of all, why generate static OPNET executables at all? There are many possible reasons, starting with distributing simulations, but in my experience the most interesting one is that you can run memory debuggers (such as valgrind) or ordinary debuggers (such as gdb) against static OPNET executables, and thus easily spot many common bugs.
These notes refer to OPNET version 11.5 (I still have to upgrade to the latest version), on Linux.

The procedure:
  1. open the scenario you want to compile
  2. click on "run simulation"; this opens the simulation dialog, then click on "simulation set info"
  3. here you will find the command OPNET will execute to launch your simulations (something like op_runsim ...) - copy it, paste it into a text editor, and remove all line breaks so that it sits on a single line
  4. in the command, replace the word op_runsim with op_mksim
  5. open a shell (bash on Linux, cmd on Windows), go to your project's directory, and execute the command you obtained (to give an example, op_mksim -opnet_user_home . -net_name sample -noprompt -kernel_type development could be such a command)
  6. OPNET will start compiling and linking your simulations; the last lines of output will tell you the name of the produced executable (e.g., something like op_mksim: Simulation (./op_models/sample.dev32.i1.sim) Produced)
  7. to execute it, replace op_mksim in your command with the produced executable's path and start the simulation (example: ./op_models/sample.dev32.i1.sim -opnet_user_home . -net_name sample -noprompt -ef sample)
Everything should go fine. In my case, the executable complains and I have to add the option "-opnet_dir /opt/opnet", although that shouldn't be necessary since this path is already set in my OPNET preferences. If you have the same problem, replace /opt/opnet with your OPNET installation path.
Next time I'll also show some examples of how to use valgrind against OPNET executables, although that is quite straightforward.
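Just to give the idea, here is a minimal sketch, reusing the executable produced in step 6 (valgrind's default memcheck tool is enough for spotting memory errors):

$> valgrind ./op_models/sample.dev32.i1.sim -opnet_user_home . -net_name sample -noprompt -ef sample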

Tuesday, December 4, 2007

Script: monitoring processes on multiple hosts

Today I'm posting a simple script that is very useful when you have several machines accessible via ssh and you want to know whether they are busy, and which processes are the most CPU-intensive at a given moment.

Simply set the variable $HOSTS_TO_MONITOR to the list of hosts you want to monitor, this way:

export HOSTS_TO_MONITOR="host1 host2 host3 host4"

and call the script.

The script uses ssh to log into each host. I strongly suggest setting up password-less (key-based) authentication for this, so that you don't have to type a password for every host.

Download the script from here
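If you just want the idea behind it, here is a minimal sketch (this is not the actual script from the link, only an illustration that assumes $HOSTS_TO_MONITOR is set as above):

#!/bin/bash
# print the load and the top CPU-consuming processes of each monitored host
for host in $HOSTS_TO_MONITOR; do
    echo "=== $host ==="
    ssh "$host" "uptime; ps -eo pcpu,pid,user,comm --sort=-pcpu | head -n 6"
done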

Thursday, November 29, 2007

new kernel module for WG111 v2 wireless card

After some time, I'm back to writing something on this blog, hoping this information can be useful...
I recently found that the latest Linux kernel from http://www.kernel.org ships a new kernel module named rtl8187.
This driver supports many network cards, including the WG111v2 that I bought some time ago - no more need for ndiswrapper (and of course, thanks to the developers for their good work)!
On openSUSE this kernel is not packaged yet, but you can compile and use it as usual. Remember to enable the option CONFIG_RTL8187=m in the config file (a rough build sketch follows below). The next time I plugged the network card into my USB port, the driver was loaded automatically and the card was configured automatically, using the network settings I had already been using with ndiswrapper.
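For reference, the build boils down to something like this (a sketch, assuming your configured kernel source tree is in /usr/src/linux; paths may differ on your system):

$> cd /usr/src/linux
$> grep RTL8187 .config            # should show CONFIG_RTL8187=m
$> make modules && make modules_install
$> modprobe rtl8187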
I also upgraded wireless-tools: although I still run openSUSE 10.2, I downloaded the latest 10.3 RPM from http://software.opensuse.org and installed it.

And everything works fine, up to now!

Tuesday, September 4, 2007

OPNET - the license file and Linux

I'm writing this post on OPNET in order to share another interesting tip with this blog's readers.
On this page, which I wrote before opening this blog, you will find an interesting tip for solving license problems with OPNET when using it on Linux.
Read it and feel free to comment directly on this blog! Maybe at some point I'll move it here for good and put a link from there...

Wednesday, August 22, 2007

an example for inotifywait...

inotify is a Linux kernel feature, introduced recently mainly to support desktop search products like beagle.
It provides a mechanism for notifying processes of file system events. Since I haven't seen many examples of how to use it, I'm writing this post to show how you can use it in makefiles.
First of all, I use inotify-tools, downloadable from http://inotify-tools.sourceforge.net/ and available for many distributions. Using these tools, I write makefiles that automatically trigger a compilation as soon as some file changes.
So, if I have a target in my makefile like (recipe lines must start with a tab):
output: file1 file2
	gcc -o output file1 file2

I'll then add a line like:
cdeps = file1 file2
that is, a variable that lists, once again, all the files my target depends on.
And then another target:
inotifywait:
	while true; do inotifywait -e move_self -e modify $(cdeps); make output; done


So, with these simple modifications, when you call:
make inotifywait
the loop starts and inotifywait blocks, waiting for an event on the files listed in $(cdeps). As soon as you modify one of those files, inotifywait exits and your event is triggered; in this case the triggered action is make output, after which the loop goes back to waiting.
This is the whole trick, quite useful when your usual cycle is: edit a file, switch to a shell, compile, check the output, and so on...
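By the way, if you don't want to touch your makefile at all, the very same loop can be run directly from a shell (the file names here are just an example):

$> while true; do inotifywait -e modify -e move_self file1 file2; make output; done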

Wednesday, August 8, 2007

Monitoring your hosts with monit

I am writing this post mainly to publish a very small perl script I wrote to check the health of some servers I own.
The script is very simple but useful: it generates output suitable for monit. There are many articles and reviews about this software, so I won't write anything else about it here, except that I like it very much because it is exactly what you need when you manage many servers.
Run the script by passing on the command line all the hosts you want to check, either by name or by IP address. It will generate a monit.d folder containing a file for each host. These files contain all the information needed to check that each host's ssh port is reachable. Copy these files to /etc/monit.d and remember to add
include /etc/monit.d/*
to your /etc/monitrc file, and then start (or restart) monit itself.
If you take a look at the script, there is a commented-out line in it that also enables ICMP echo checks, in case you need them.
The script can be downloaded here.
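If you only want the idea behind it, here is a minimal sketch of such a generator (not the actual perl script from the link; it is a shell illustration that assumes standard monit syntax for a TCP check on port 22):

#!/bin/bash
# generate one monit snippet per host given on the command line
mkdir -p monit.d
for h in "$@"; do
    cat > "monit.d/$h" <<EOF
check host $h with address $h
    if failed port 22 protocol ssh then alert
EOF
done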

Thursday, August 2, 2007

Using Makefiles with OPNET

With this second post on OPNET, I want to share a small script I wrote to generate Makefiles for compiling OPNET simulations.
The script is named makefile_gen.pl; it simply scans your op_models directory and prints (on standard output) lines suitable to be put in a makefile.
So, if you want to generate a makefile for an optimized simulation, simply type:
$> cd
$> perl makefile_gen.pl 1 > opt_makefile

then, you can run:
$> make -f opt_makefile dep
to generate dependencies, and
$> make -f opt_makefile

to compile all files. After this, the simulation will still be linked only when you start it, either from Modeler or from the command line. But with this small script, you have the advantage of recompiling only the files that actually need to be recompiled, and nothing more. Also, if you have more than one core/CPU, you can use for instance "make -j 2" to enable parallel compilation.

You can download the script from here.
It has been tested on Linux, but should work on other platforms with some minor modifications.

Wednesday, July 18, 2007

Using OPNET with ccache (on Linux)

I use the OPNET network simulator for research purposes. Its default compilation process compiles the files with gcc (the default on Linux), one by one, with a very poor dependency check, so that very often you end up running with "force model recompilation" just to be sure.
To speed up this compilation process for my simulations I use a series of commonly used tools, such as ccache (a compiler cache), make (to recompile only what needs to be recompiled) and icecream (to compile many files in parallel on other machines). In this first part I'm writing down the simplest thing, that is, how to use ccache.

This is very straightforward. First of all, install ccache. Then, use one of the methods suggested in the ccache documentation. I prefer the symlink ("masquerade") method: create a symlink named gcc that points to ccache, in a directory that comes before the real gcc in your PATH. For example:
$> which ccache   # discover where ccache is, suppose /usr/bin/ccache
$> ln -s /usr/bin/ccache /usr/local/bin/gcc
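If you'd rather not touch system directories, a per-user variant of the same idea works just as well (the directory name here is only an example):

$> mkdir -p ~/ccache-bin
$> ln -s /usr/bin/ccache ~/ccache-bin/gcc
$> export PATH=~/ccache-bin:$PATH   # add this line to your shell profile to make it permanent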

now, call
$> ccache -s
to view the ccache statistics, then try to compile something in OPNET and run ccache -s again. You should see some change in the statistics...
Well, so we have seen that using ccache is very simple, and you'll find it very effective if you're used to running "force model recompilation" to avoid dependency problems. Using icecream and Makefiles will be a little trickier, but if you're interested you'd better follow this blog!

Tuesday, July 10, 2007

Browsing your phone with KDE

This time I'm going to write down how you can access your phone's folders using KDE. These notes refer to openSUSE 10.2 with KDE 3.5.7, but I have used the same method on other Linux versions and it is quite straightforward. The phone I used is a Nokia 3200 with infrared connectivity (it implements the OBEX protocol, which is very common in phones).

The first method you can use involves a KDE component. On SuSE, you must have kdebluetooth installed in order to have the kio_obex component in KDE's library folder.
You must also configure the IrDA service (/etc/init.d/irda on SuSE); this is a different topic and there are plenty of useful guides on the internet, so for now I'll skip this step.
Now, in KDE, all you have to do is enable infrared on your phone, put it near the infrared port of your PC, and then open a Konqueror window and type this URL:
obex2://irda

(depending on your packages, it could also be obex://irda)
and you'll get full access to the user folders of your phone! At least on mine, I can access all the folders containing images and ringtones; depending on your phone, you may also be able to access game folders and so on...

The second method is to use obexfs. This also requires installing fuse; both packages are very easy to install on openSUSE.
With this method, all you have to do is create a directory to mount your phone on, e.g. /mnt/phone, and then type
obexfs -i /mnt/phone
(-i is for IrDA; with -b you can just as easily access Bluetooth phones) and then browse the folder. Remember to unmount the phone when you're finished! To do this, type:
fusermount -u /mnt/phone

and that's it, I hope this little article can be useful!

Thursday, July 5, 2007

GNU Octave Compilation

I tried to compile GNU Octave by myself, in order to take advantage of the improved performance that can be obtained by using the optimized ATLAS libraries (see the links if you want to know what I'm talking about).
I personally found it hard to understand what the correct procedure is, so I'm writing down the steps I followed to get the whole thing done.
First, download the latest versions of both Octave and ATLAS, and uncompress them as usual.
Then, compile ATLAS (this is straightforward if you follow the instructions provided with it).
Last, the part that I found a little tricky to understand. Provided you compiled ATLAS for an architecture named Linux_SOMEARCH, and you're doing everything in a folder /root/octave, you should call configure like this:

./configure --enable-shared --enable-dl --disable-static LDFLAGS="-L/root/octave/ATLAS/lib/Linux_SOMEARCH"

It is very important that the library location specified with LDFLAGS be absolute rather than relative: I tried with a relative path and the compilation broke because of it. Add to configure any other options you need, and then compile everything so that you can enjoy the improved performance!
Finally, note that this refers to GNU Octave version 2.1.73 (the latest stable version at the time of this post).
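To put the steps together, the whole sequence looks roughly like this (a sketch only; the paths, the version number and the architecture name are just examples, and ATLAS itself is built following its own INSTALL notes):

$> cd /root/octave                   # ATLAS already built in ./ATLAS, producing lib/Linux_SOMEARCH
$> cd octave-2.1.73
$> ./configure --enable-shared --enable-dl --disable-static LDFLAGS="-L/root/octave/ATLAS/lib/Linux_SOMEARCH"
$> make
$> make install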

My first post

Hi to all my readers!
I opened this blog to collect and publish everything I find during my work, so that everyone can hopefully benefit from me sharing my experience! Enjoy!