
Sep 8, 2011

How to use dstat to monitor your Linux/UNIX server


If you have a Linux server running at your office or at a data center for which you are responsible, you want to maintain an uptime as close to a hundred percent as possible. In that case you want to keep an eye on how the system is running; to be precise, you want to monitor all the system resources that keep the system running well and thus contribute to high uptime. Memory, CPU, and disk usage are some of the things you want to observe. You would usually use a combination of the tools that come with a Linux or UNIX installation, such as “free”, “top”, and “vmstat”. I’ll introduce you to a tool that gives you just about all the information those tools give you combined, all under one roof: Dstat. The developer of this command-line tool, Dag Wieers, calls it “a versatile replacement for vmstat, iostat, netstat and ifstat”. He adds that “Dstat overcomes some of their limitations and adds some extra features”. To me Dstat is the mother of all command-line system monitoring tools. It’s simple to install, easy to use, can be tweaked with ease, and generates reports that you can plot as a graph to impress your boss.

Installing Dstat

Start by downloading the Dstat installer. Point your web browser to the Dstat project’s homepage, http://dag.wieers.com/home-made/dstat/. Scroll down to the section of the page where the downloads are listed and pick the latest version of Dstat for the Linux distribution you are running, then click on the download link. I’ll show you how to do it for a Red Hat Enterprise Linux version 4 machine:
# wget http://dag.wieers.com/rpm/packages/dstat/dstat-0.6.6-1.el4.rf.noarch.rpm
Now install Dstat:
# rpm -Uvh dstat-0.6.6-1.el4.rf.noarch.rpm
If the installation went through without errors, that’s it: you have Dstat installed and ready for use. If some dependencies came up during the installation, just install the required packages and try again. Dstat doesn’t have many dependencies, so you should not face any problems.
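If you are on a Debian-based distribution such as Ubuntu instead, dstat is available in the standard repositories, so you can skip the RPM download and install it with the package manager:
# apt-get install dstat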

Using Dstat

With Dstat installed on your system you should be good to go. Begin by launching the command from a terminal:
# dstat
The output will look something like the following. Press Ctrl+C to exit.
# dstat
----total-cpu-usage---- -disk/total -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read write| recv  send|  in   out | int   csw
  7   1  91   1   0   0|   0     0 |   0     0 |   0     0 |   0     0
  1   0  99   0   0   0|   0     0 |   0     0 |   0     0 |1051  1945
  0   0 100   0   0   0|   0    12k|2269B 11.2k|   0     0 |1031  1923
  1   0  99   0   0   0|   0    40k|   0     0 |   0     0 |1078  2235
  0   0 100   0   0   0|   0    16k|6027B 21.5k|   0     0 |1008  2219
There are a number of options available for Dstat. As I mentioned earlier, Dstat is quite easy to tweak: you select just the statistics you want with individual flags. For example, to limit the output to the disk, network, paging, and system statistics, run the following command:
# dstat -dngy
-disk/total -net/total- ---paging-- ---system--
 read write| recv  send|  in   out | int   csw
   0     0 |   0     0 |   0     0 |   0     0
   0    12k|2295B 9603B|   0     0 |1053  1957
   0     0 | 594B    0 |   0     0 |1002  1893
   0   960k| 292B 3346B|   0     0 |1072  2012
   0  4096B|  64B    0 |   0     0 |1031  1939
You can find more options in the application’s built-in help, which you can access by entering the following:
# dstat -h
Play with the options a little so you get comfortable with them. The default interval between updates is one second. You can change that interval if you need to. To increase the interval to ten seconds, enter the following:
# dstat 10
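The interval can be combined with the stat-selection flags from earlier. For example, to watch just the CPU, disk, and network statistics every ten seconds:
# dstat -cdn 10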
Another useful feature is that with a longer interval each finished line shows the average over the whole interval, while the line currently being written keeps refreshing every second as the data changes. You might also want Dstat to give you a fixed number of updates, say five updates at three-second intervals. Here’s how you would go about doing that:
# dstat 3 5
----total-cpu-usage---- -disk/total -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read write| recv  send|  in   out | int   csw
  3   1  95   2   0   0|   0     0 |   0     0 |   0     0 |   0     0
  2  11  87   1   0   0|   0   180k|2581B 3239B|   0     0 |1136   697
  7   8  85   0   0   0|   0     0 | 115k  106k|   0     0 |1603  3985
  2   1  98   0   0   0|   0     0 |77.5k  170k|   0     0 |1744  3856
  1   0 100   0   0   0|   0     0 |3451B 9993B|   0     0 |1045   149
  2   1  94   4   0   0|   0   276k|20.4k 31.1k|   0     0 |1219   804
The feature that I find most useful is the output option. Dstat allows you to have the output of a session written to a comma-separated file, which can later be imported into a spreadsheet application such as Microsoft Excel and plotted as a graph. Here is how it can be used.
# dstat --output /tmp/dstat_data.csv -cdn
Let the above command run while you run your applications or do your testing on this machine. Press Ctrl+C when you are done. Open the file /tmp/dstat_data.csv with a spreadsheet application such as Microsoft Excel or OpenOffice. You can then select the columns you want plotted into a graph and let your spreadsheet application do the magic.
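If you would rather stay at the command line, gnuplot can plot the same file. The following is just a minimal sketch, not a Dstat feature: it assumes the CSV came from the dstat --output run above with -cdn, that disk reads and writes land in columns 7 and 8, and that Dstat wrote about seven header lines at the top of the file (which is what every ::7 skips); adjust those numbers for your version.
gnuplot -persist <<'EOF'
# columns 7 and 8 are assumed to hold the disk read/write figures for a -cdn run;
# "every ::7" skips the header lines dstat writes before the data
set datafile separator ','
plot '/tmp/dstat_data.csv' every ::7 using 7 with lines title 'disk read', \
     ''                    every ::7 using 8 with lines title 'disk write'
EOF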

Get a Report by Mail

There may be cases wherein you want to observe how your server performs over a period of time. You can set up a background process in Linux that takes a reading at a set interval, generates a report, and mails the file out to you. This can be especially useful during a stress test. Here’s how you could do that. The following script will run Dstat for three hours, taking a reading every 30 seconds, and will mail the report to me@myemailid.com.
#!/bin/bash
# 360 readings at a 30-second interval = 3 hours
dstat --output /tmp/dstat_data_mail.csv -cdn 30 360
mutt -s "Dstat Report for 3 hour run" -a /tmp/dstat_data_mail.csv -- me@myemailid.com < /dev/null
Save the above script in a file called dstat_script.sh on your server, give it executable permission and then run it as a background process:
# chmod +x dstat_script.sh
# nohup ./dstat_script.sh &
Done. Yes, it’s that simple. The report will be mailed to you once the run completes. You can optionally schedule this script as a daily cron job so that you receive this data every day.
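For example, a crontab entry like the following would kick off the three-hour run at midnight every night (the path is a placeholder; point it at wherever you saved the script):
0 0 * * * /path/to/dstat_script.sh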
As you may already have realized, Dstat is a wonderful tool for performance monitoring and debugging. The granularity, frequency, and nature of the data collected are completely up to you. Dstat empowers you to know what is going on on your server. Used wisely, this power can mean far fewer sleepless nights for you.

How to download files from the Linux command line


Wget is a very cool command-line downloader for Linux and UNIX environments. Don’t be fooled by the fact that it is a command line tool. It is very powerful and versatile and can match some of the best graphical downloaders around today. It has features such as resuming downloads, bandwidth control, authentication handling, and much more. I’ll get you started with the basics of using wget and then I’ll show you how you can automate a complete backup of your website using wget and cron.
Let’s get started by installing wget. Most Linux distributions come with wget pre-installed. If you manage to land yourself a Linux machine without a copy of wget, try the following. On a Red Hat based system such as Fedora you can use:
yum install wget
or if you use a Debian based system like Ubuntu:
sudo apt-get install wget
One of the above should do the trick for you. Otherwise, check your Linux distribution’s manual to see how to get and install packages. wget has also been ported to Windows. Users on Windows can access this website. Download the following packages: ssllibs and wget. Extract and copy the files to a directory such as C:\Program Files\wget and add that directory to your system’s PATH so you can access it with ease. Now you should be able to access wget from your Windows command line.
The most basic operation a download manager needs to perform is to download a file from a URL. Here’s how you would use wget to download a file:
wget http://www.sevenacross.com/photos.zip
Yes, it’s that simple. Now let’s do something more fun. Let’s download an entire website. Here’s a taste of the power of wget. If you want to download a website you can specify how deep wget should follow links while fetching files. Say you want to download the first level links of Yahoo!’s home page. Here’s how you would do that:
wget -r -l 1 http://www.yahoo.com/
Here’s what each option does. The -r activates the recursive retrieval of files. The -l stands for level, and the number 1 next to it tells wget how many levels deep to go while fetching the files. Try increasing the number of levels to two and see how much longer wget takes.
Now if you want to download all the “jpeg” images from a website, a user familiar with the Linux command line might guess that a command like “wget http://www.sevenacross.com*.jpeg” would work. Well, unfortunately, it won’t. What you need to do is something like this:
wget -r -l1 --no-parent -A.jpeg http://www.sevenacross.com
Here the --no-parent option stops wget from climbing up to the parent directory, and -A.jpeg accepts only files whose names end in .jpeg.
Another very useful option in wget is the ability to resume a download. Say you started downloading a large file and you lost your Internet connection before the download could complete. You can use the -c option to continue the download from where you left off.
wget -c http://www.sevenacross.com/ubuntu-live.iso
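The bandwidth control I mentioned at the start is wget’s --limit-rate option. For instance, to resume the same download while capping it at 200 KB/s so it doesn’t hog your connection:
wget -c --limit-rate=200k http://www.sevenacross.com/ubuntu-live.iso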
Now let’s move on to setting up a daily backup of a website. The following command will create a mirror of a site in your local disk. For this purpose wget has a specific option, --mirror. Try the following command, replacing http://sevenacross.com with your website’s address.
wget --mirror http://www.sevenacross.com/
When the command is done running you should have a local mirror of your website. This makes for a pretty handy backup tool. Let’s turn this command into a cool shell script and schedule it to run at midnight every night. Open your favorite text editor and type the following. Remember to adapt the path of the backup and the website URL to your requirements.
#!/bin/bash
YEAR=$(date +"%Y")
MONTH=$(date +"%m")
DAY=$(date +"%d")
BACKUP_PATH="/home/backup" # replace path with your backup directory
WEBSITE_URL="http://www.sevenacross.net" # replace url with the address of the website you want to back up
# Create today's backup directory and move into it
mkdir -p "$BACKUP_PATH/$YEAR/$MONTH/$DAY"
cd "$BACKUP_PATH/$YEAR/$MONTH/$DAY" || exit 1
wget --mirror "$WEBSITE_URL"
Now save this file as something like website_backup.sh and grant it executable permissions:
chmod +x website_backup.sh
Open your cron configuration with the crontab command (crontab -e) and add the following line at the end:
0 0 * * * /path/to/website_backup.sh
You should have a copy of your website in /home/backup/YEAR/MONTH/DAY every day. For more help using cron and crontab, see the crontab man page.
There’s a lot more to learn about wget than I’ve mentioned here. Read up on wget’s man page.
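As a taste, here is the authentication support mentioned at the start. The URL and username are placeholders for illustration; --ask-password prompts you interactively so the password doesn’t end up in your shell history:
wget --user=myname --ask-password http://www.sevenacross.com/private/photos.zip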