MRKAVANA (mrkavana@gmail.com) - www.facebook.com/kavanathai

Aug 25, 2011

Loadbalancing Multiwan by Linux

How to: Loadbalancing & Failover with Dual multiwan / adsl / cable connections on Linux


In many locations, including but definitely not limited to India, a single ADSL / Cable connection can be unreliable and may not provide sufficient bandwidth for your purposes. One way to increase the reliability and bandwidth of your internet connection is to distribute the load (load balancing) across multiple connections. It is also imperative to have transparent fail-over, so routes are automatically adjusted depending on the availability of the connections. With load balancing and fail-over you can have reliable connectivity over two or more unreliable broadband connections (like BSNL or Tata Indicom in India). I present the simplest solution to a complex problem, with live examples.
Note: Load balancing doesn’t increase connection speed for a single connection. Its benefits are realized over multiple connections like in an office environment. The benefits of fail-over are however realized even in a single user environment.
The Linux load-balancing mechanism, discussed with an example below, caches routes and does not by itself provide transparent fail-over. There are two ways to incorporate transparent fail-over: 1. compiling and using a custom Linux kernel with Julian Anastasov's kernel patches for dead gateway detection, or 2. a user-space script that monitors the connections and dynamically changes the routing information.
Julian Anastasov’s patches have two problems:
1. They work only when the first hop gateway is down. In many cases, including ours, the first hop gateway is the adsl modem cum router which is always up. So we need a more robust solution for our purposes.
2. You have to compile a custom kernel with the patches. This is a somewhat complex procedure with a reasonable chance of screwing something up. It also forces you to re-patch the kernel every time you decide to update it. Overall I wouldn't recommend the kernel-patching route unless it is the only option, and even then you should look for an rpm-based solution (like the livna rpm for nVidia drivers) which does it automatically for you.
A better solution is to use a userspace program which monitors your connection and updates routes as necessary. I will provide a script which we use to constantly monitor our connections. It provides transparent fail over support with two ADSL connections. It is fully configurable and can be used for any standard dual ADSL / Cable connections to provide transparent fail over support. It can also be easily modified to use for more than two connections. You can also use it to log uptime / downtime of your connections like we did.
Let’s first discuss load balancing with two ADSL / Cable connections and then we will see how to provide transparent fail-over support. The ideas and script provided here can be easily used for more than two connections with minor modifications.

Requirements for Load Balancing multiple ADSL / Cable Connections

1. Obviously you need to have multiple (A)DSL or Cable connections in the first place. Login as root for this job.
2. Find out the LAN / internal IP address of the modems. They may be the same, e.g. 192.168.1.1.
Check if the internal / LAN IP addresses of both (or all) modems are the same. In that case, use the web / telnet interface of the modems to configure one of them to have a different internal IP address, preferably in a different network, like 192.168.0.1 or 192.168.2.1. If you are using multiple modems then you should configure each of them to be on a different subnet. This is important because you can then easily access each modem from its web interface, without having to connect to a modem through a particular interface. It is also important because you can then easily configure the interfaces to be associated with different netmasks / sub-networks.
3. Connect each modem to the computer using a different interface (eth0, eth1 etc.). You may be able to use the same interface but this guide doesn’t cover that. In short you will make your life complicated using the same interface or even different virtual interface. My recommendation is that you should use one interface per modem. Don’t scrimp on cheap ethernet adapters. This has the added benefit of redundancy should one adapter go bad down the road.
4. Configure the IP address of each interface to be in the same sub-network as the modem. For example my modems have IP addresses of 192.168.0.1 and 192.168.1.1. The corresponding addresses & netmasks of the interfaces are: 192.168.0.10 (netmask: 255.255.255.0) and 192.168.1.10 (netmask: 255.255.255.0).
5. Find out the following information before you proceed with the rest of the guide:
  1. IP address of external interfaces (interfaces connected to your modems). This is not the gateway address.
  2. Gateway IP address of each broadband connection. This is the first-hop gateway; it could be your DSL modem's IP address if it has been configured as the gateway following the tip below.
  3. Name, IP address & netmask of external interfaces like eth1, eth2 etc. My external interfaces are eth1 & eth2.
  4. Relative weights you want to assign to each connection. My Tata connection is 4 times faster than BSNL connection. So I assign the weight of 4 to Tata and 1 to BSNL. You must use low positive integer values for weights. For same connection speeds weights of 1 & 1 are appropriate. The weights determine how the load is balanced across multiple connections. In my case Tata is 4 times as likely to be used as route for a particular site in comparison with BSNL.
Note: Refer to Netmask guide for details on netmasks.
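The network an interface belongs to is simply the bitwise AND of its IP address and netmask; a quick shell sketch of that arithmetic, using this guide's example values:

```shell
# Sketch: the network address is the bitwise AND of IP and netmask.
# Example values from this guide (192.168.1.10 / 255.255.255.0).
IP=192.168.1.10
MASK=255.255.255.0

oldIFS=$IFS; IFS=.
set -- $IP;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $MASK; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS

NETWORK="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "$IP with netmask $MASK is on network $NETWORK"
```

With a /24 netmask the two interfaces above land on 192.168.1.0 and 192.168.0.0, which is why giving each modem its own subnet keeps the routes unambiguous.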
Optional step
Check the tips on configuring (A)DSL modems. They are not required for using this guide, but they help you get the most out of it.

How to setup default load balancing for multiple ADSL / Cable connections

Unlike other guides on this topic I will use a real example – the configuration on our internal network. So to begin with here are the basic data for my network:
#IP address of external interfaces. This is not the gateway address.
IP1=192.168.1.10
IP2=192.168.0.10
#Gateway IP addresses. This is the first (hop) gateway, could be your router IP
#address if it has been configured as the gateway
GW1=192.168.1.1
GW2=192.168.0.1
# Relative weights of routes. Keep this to a low integer value. I am using 4
# for TATA connection because it is 4 times faster
W1=1
W2=4
# Broadband providers name; use your own names here.
NAME1=bsnl
NAME2=tata
You must change the example below to use your own IP addresses and other details. Even with that inconvenience a real example is much easier to understand than examples with complex notations. The example given below is copy-pasted from our intranet configuration. It works perfectly as advertised.
Note: In this step fail-over is not addressed. It is provided later with a script which runs on startup.
First you need to create two (or more) routing tables in the routing table file ( /etc/iproute2/rt_tables ). Open the file and make changes similar to what is shown below. I added the following for my two connections:

1 bsnl
2 tata
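If you script this step instead of editing by hand, it is worth making the additions idempotent so that re-running them never duplicates entries. A sketch against a scratch file (on the real system the file would be /etc/iproute2/rt_tables):

```shell
# Sketch: add the routing-table entries idempotently, so re-running the
# commands never duplicates lines. Shown against a scratch copy; on a
# real system the file is /etc/iproute2/rt_tables.
RT_TABLES=`mktemp`

grep -qw bsnl "$RT_TABLES" || echo "1 bsnl" >> "$RT_TABLES"
grep -qw tata "$RT_TABLES" || echo "2 tata" >> "$RT_TABLES"

# Running the same line again is a no-op:
grep -qw bsnl "$RT_TABLES" || echo "1 bsnl" >> "$RT_TABLES"
cat "$RT_TABLES"
```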
To add a default load balancing route for our outgoing traffic using our dual internet connections (ADSL broadband connections from BSNL & Tata Indicom) here are the lines I included in rc.local file:

ip route add 192.168.1.0/24 dev eth1 src 192.168.1.10 table bsnl
ip route add default via 192.168.1.1 table bsnl
ip route add 192.168.0.0/24 dev eth2 src 192.168.0.10 table tata
ip route add default via 192.168.0.1 table tata
ip rule add from 192.168.1.10 table bsnl
ip rule add from 192.168.0.10 table tata
ip route add default scope global nexthop via 192.168.1.1 dev eth1 weight 1 nexthop via 192.168.0.1 dev eth2 weight 4
Adding them to rc.local ensures that they are executed automatically on startup. You can also run them manually from the command line.
This completes the load balancing part. Let's now see how we can achieve fail-over, so the routes are automatically changed when one or more connections go down, and changed again when they come back up. To do this magic I used a script.

How to setup fail-over over multiple load balanced ADSL / Cable connections

Please follow the steps below and preferably in the same order:
  1. First download the script which checks for and provides fail-over over dual ADSL / Cable internet connections and save it to the /usr/sbin directory (or any other directory that is mounted and available while the OS is loading).
  2. Change the file permissions to 755:
    chmod 755 /usr/sbin/gwping
  3. Open the file (as root) in an editor like vi or gedit and edit the following parameters for your environment:
    #IP Address or domain name to ping. The script relies on the domain being pingable and always available
    TESTIP=www.yahoo.com
    #Ping timeout in seconds
    TIMEOUT=2
    # External interfaces
    EXTIF1=eth1
    EXTIF2=eth2
    #IP address of external interfaces. This is not the gateway address.
    IP1=192.168.1.10
    IP2=192.168.0.10
    #Gateway IP addresses. This is the first (hop) gateway, could be your router IP
    #address if it has been configured as the gateway
    GW1=192.168.1.1
    GW2=192.168.0.1
    # Relative weights of routes. Keep this to a low integer value. I am using 4
    # for TATA connection because it is 4 times faster
    W1=1
    W2=4
    # Broadband providers name; use your own names here.
    NAME1=BSNL
    NAME2=TATA
    #No of repeats of success or failure before changing status of connection
    SUCCESSREPEATCOUNT=4
    FAILUREREPEATCOUNT=1
    Note: For my environment, four consecutive successes indicate that the gateway is up, and one (consecutive) failure indicates that the gateway went down. You may want to modify these values to better match your environment.
  4. Add the following line to the end of /etc/rc.local file:
    nohup /usr/sbin/gwping &
In the end my /etc/rc.local file has the following lines added in total:
ip route add 192.168.1.0/24 dev eth1 src 192.168.1.10 table bsnl
ip route add default via 192.168.1.1 table bsnl
ip route add 192.168.0.0/24 dev eth2 src 192.168.0.10 table tata
ip route add default via 192.168.0.1 table tata
ip rule add from 192.168.1.10 table bsnl
ip rule add from 192.168.0.10 table tata
ip route add default scope global nexthop via 192.168.1.1 dev eth1 weight 1 nexthop via 192.168.0.1 dev eth2 weight 4
nohup /usr/sbin/gwping &
An astute reader may note that the default load-balanced route (the 7th line) is not strictly required, since the script forces routing based on the current status the very first time it runs. However, it is there to ensure proper routing before the script takes over for the first time, which takes about 40 seconds in my setup (can you tell why it takes 40 seconds the first time?).
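The downloadable gwping script is not reproduced here, but its heart can be sketched as a small state machine: count consecutive probe successes and failures per connection, and only flip the connection's status when the configured threshold is crossed (in the real script a status change would also rewrite the default route). A minimal sketch of that counting logic, with the actual ping replaced by a stub argument so the behavior is visible:

```shell
# Sketch of the fail-over decision logic (real pings replaced by a stub
# argument). Status flips DOWN only after FAILUREREPEATCOUNT consecutive
# failures, and back UP only after SUCCESSREPEATCOUNT consecutive
# successes, matching the script parameters above.
SUCCESSREPEATCOUNT=4
FAILUREREPEATCOUNT=1
STATUS=UP
SUCC=0
FAIL=0

update_status() {   # $1 = result of one probe: "ok" or "fail"
    if [ "$1" = ok ]; then
        SUCC=$((SUCC + 1)); FAIL=0
        [ "$STATUS" = DOWN ] && [ "$SUCC" -ge "$SUCCESSREPEATCOUNT" ] && STATUS=UP
    else
        FAIL=$((FAIL + 1)); SUCC=0
        [ "$STATUS" = UP ] && [ "$FAIL" -ge "$FAILUREREPEATCOUNT" ] && STATUS=DOWN
    fi
    return 0
}

update_status fail
echo "after 1 failure: $STATUS"     # DOWN: one failure is enough
update_status ok; update_status ok; update_status ok
echo "after 3 successes: $STATUS"   # still DOWN: threshold is 4
update_status ok
echo "after 4 successes: $STATUS"   # back UP
```

Resetting the opposite counter on every probe is what makes the counts *consecutive*, which is exactly what SUCCESSREPEATCOUNT and FAILUREREPEATCOUNT describe.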
Concluding thoughts
In the process of finding and coding the simple solution above, I read several documents on routing, including the famous LARTC how-to (many of whose commands didn't work as described on my Fedora Core system) and nano.txt, among others. I believe I have described the simplest possible solution for load balancing and transparent fail-over of two or more DSL / Cable connections from one or more providers where channel bonding is not provided upstream (that would require cooperation from the DSL providers), which is the most common scenario. I would welcome suggestions and improvements to this document.
The solution has been well tested under multiple real and artificial load conditions and works extremely well, with users never realizing when a connection went down or came back up.
Networking is a complex thing and it is conceivable that you may run into issues not covered here. Feel free to post your problems and solutions here. However, while I would like to, I will not be able to debug and solve individual problems due to time constraints.
I may however be able to offer useful suggestions to your unique problems. It may however be noted that I respond well to Café Estima Blend™ by Starbucks and move much quicker on my todo list. It is also great as a token of appreciation for my hard work. The “velvety smooth and balanced with a roasty-sweet flavor this blend of coffees is a product of the relationships formed between” us.
Source: http://blog.taragana.com/index.php/archive/how-to-load-balancing-failover-with-dual-multi-wan-adsl-cable-connections-on-linux/
------------------------------------------------------
Or you can install and use pfSense, which is built on FreeBSD.
I used pfSense on a Pentium 4 PC, with Squid proxy caching and an iptables firewall, for over 200 users.
See pfSense here: http://www.pfsense.org
pfSense is very good (recommended).



Script to email successful Ftp logins


This shell script searches the server logs on a daily basis and emails you the successful Ftp logins of the day. The ftp logs are saved in the /var/log/messages file, as by default there is no separate log file for Ftp in Linux.
Create a file /home/script/ftplogins.sh and paste the below code:
#!/bin/bash
#Retrieve the current date
CUR_DATE=`date +"%b %e"`
#Create a temporary file to store the logs
touch /tmp/out.txt

echo "List Follows" > /tmp/out.txt
#Search the successful attempts and save in the temporary file
/bin/grep "$CUR_DATE" /var/log/messages | grep pure-ftpd | grep logged >> /tmp/out.txt
#Email the contents of the file to your email address
/bin/mail -s "Successful Ftp Login Attempts on '$CUR_DATE'" youremail@yourdomain.com < /tmp/out.txt
Save the file. You now have to schedule a cron job to execute the file once a day to search the logs. Edit the cron file
crontab -e
and add the following cron job
59 23 * * * /bin/sh /home/script/ftplogins.sh
Note:
1) This script will work with Pure-Ftpd server. You will have to edit the search string a bit according to your Ftp server.
2) If you copy/paste the script as it is in shell, the single and double quotes may change to dots (.) so make sure you correct them before executing the script.
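To see what the pipeline in the script actually keeps, here it is run against a small fabricated sample of /var/log/messages (the pure-ftpd line format here is only an approximation of real log lines):

```shell
# The same filter as in ftplogins.sh, run against a fabricated sample
# of /var/log/messages (the pure-ftpd message format is approximate).
CUR_DATE=`date +"%b %e"`
LOG=`mktemp`
cat > "$LOG" <<EOF
$CUR_DATE 10:01:02 host pure-ftpd: (user1@1.2.3.4) [INFO] user1 is now logged in
$CUR_DATE 10:05:09 host sshd[123]: Accepted password for root from 1.2.3.4
$CUR_DATE 10:07:11 host pure-ftpd: (user1@1.2.3.4) [INFO] Logout.
EOF
# Only today's pure-ftpd line containing "logged" survives the filter:
grep "$CUR_DATE" "$LOG" | grep pure-ftpd | grep logged
```

Each grep narrows the selection: today's lines, then pure-ftpd lines, then the successful-login messages.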

Shell Script to Monitor Load Average on a Linux server


The load average on a server reflects its current state: the higher the load average, the poorer the server's performance, so it is necessary to monitor it. The following shell script monitors the load average on a Linux server and, if it exceeds a defined threshold, emails the server administrator along with the list of currently running processes.
Create a file, say, /root/monit_loadaverage.sh and paste the following script in it:
############### START OF THE SCRIPT ###############
#!/bin/bash
# Define Variables
CUR_TIME=`date +"%A %b %e %r"`
HOSTNAME=`hostname`
# Retrieve the load average of the past 1 minute
Load_AVG=`uptime | cut -d'l' -f2 | awk '{print $3}' | cut -d. -f1`
LOAD_CUR=`uptime | cut -d'l' -f2 | awk '{print $3 " " $4 " " $5}' | sed 's/,//'`
# Define Threshold. This value will be compared with the current load average.
# Set the value as per your wish.
LIMIT=5
# Compare the current load average with the Threshold and
# email the server administrator if the current load average is greater.
if [ $Load_AVG -gt $LIMIT ]
then
#Save the current running processes in a file
/bin/ps auxf >> /root/ps_output
echo "Current Time :: $CUR_TIME" >> /tmp/monitload.txt
echo "Current Load Average :: $LOAD_CUR" >> /tmp/monitload.txt
echo "The list of current processes is attached with the email for your reference." >> /tmp/monitload.txt
echo "Please Check... ASAP."  >> /tmp/monitload.txt
# Send an email to the administrator of the server
/usr/bin/mutt -s "ALERT!!! High 1 minute load average on '$HOSTNAME'" -a /root/ps_output youremail@yourdomain.tld < /tmp/monitload.txt
fi
# Remove the temporary log files
/bin/rm -f /tmp/monitload.txt
/bin/rm -f /root/ps_output
############### END OF THE SCRIPT ###############
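As a side note, the uptime parsing in the script depends on the exact wording of the uptime output (it splits on the letter 'l' of "load average"). A sketch of a more robust alternative that reads /proc/loadavg, whose format is fixed:

```shell
# Read the load averages from /proc/loadavg, whose format is fixed:
# "0.01 0.05 0.10 1/234 5678" (1, 5 and 15 minute averages, then extras).
LOAD_AVG=`cut -d' ' -f1 /proc/loadavg | cut -d. -f1`   # 1-minute value, integer part
LOAD_CUR=`cut -d' ' -f1-3 /proc/loadavg`               # all three averages
echo "1-minute load (integer): $LOAD_AVG"
echo "current load averages: $LOAD_CUR"
```

These two lines could be swapped into the script in place of the uptime pipelines without changing anything else.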
Now, schedule a cronjob to execute the script on per minute basis. Edit the cronjob file
# crontab -e
and place the following cronjob at the end of the file
* * * * * /bin/sh /root/monit_loadaverage.sh
restart the crond service
# service crond restart
In order to use “mutt” to send emails, you need to install the mutt package on the server. It allows you to send emails with attachments.
# yum install mutt
Note: Please place a comment if you receive any error message while executing this script OR you need some modifications in the script.

Howto install a PHP FileInfo module in Linux?


The steps to install the PHP 'Fileinfo' module on a Linux server are as follows:
1) Download and untar the package
# wget http://pecl.php.net/get/Fileinfo-1.0.4.tgz
# tar -zxf Fileinfo-1.0.4.tgz
# cd Fileinfo-1.0.4
2) Generate the extension for compiling
# phpize
3) Configure the module
# ./configure
4) Generate the install files and install it
# make
# make install
5) Once done, the extension will be available under the /usr/lib64/php/modules directory.
You now need to add the extension somewhere in the php configuration file. Edit /etc/php.ini and add the following:

extension=fileinfo.so
6) Save the file and restart the webserver
# service httpd restart
To check if “fileinfo” is enabled on the server, execute:
# php -i | grep fileinfo
fileinfo
fileinfo support => enabled
Alternate method
Just an FYI, the module can also be installed using the PECL command i.e.
# pecl install fileinfo
Once done, just follow steps 5 and 6 mentioned above. That’s it.

How to install a Perl Module in Linux?


There are various ways to download and install Perl modules from CPAN. We will look at the different options below.
1. The easiest way is to use the perl CPAN module. SSH to the server as root and execute:
# cpan
If you are running this for the first time, it will prompt you with a few questions (the default answers are fine) before presenting the "cpan >" prompt. To install a module, say for example "Perl::OSType", execute
cpan > install Perl::OSType
This will download and compile the module and install it server-wide. For more cpan commands/options, type a question mark ( ? ) at the cpan prompt.
2. The second and the quickest method is to use perl CPAN module from the bash prompt instead of ‘cpan’ prompt. If you are on the command line of Linux, just execute
# perl -MCPAN -e 'install Perl::OSType'
3. The above two methods are the easiest, but they may not always work, in which case you can install the module manually. Search for the module at http://search.cpan.org/ and then wget it to your server. Once done, extract it:
# tar -zxf Perl-OSType-1.002.tar.gz
The standard installation steps are given below; the extracted directory also contains a README file with module-specific instructions. Go to the extracted directory and execute
# perl Makefile.PL
# make
# make test
# make install

How to install PDFlib-Lite & PDFlib on a RedHat/CentOS server?


PDFlib is a library used for generating and manipulating files in Portable Document Format (PDF); the Lite edition is free for personal use. The primary goal of PDFlib is to create dynamic PDF documents on a web server or similar systems and to allow a "save as PDF" capability.
The following steps will help you install PDFlib-Lite and PDFlib on a CentOS server, including cPanel and Plesk servers. SSH to the server as user 'root':
1) Download the PDFlib-Lite package required for PDFlib installation in a temporary directory
# cd /usr/local/src
# wget http://www.pdflib.com/binaries/PDFlib/705/PDFlib-Lite-7.0.5.tar.gz
2) Unpack the package and go to the PDFlib-Lite directory
# tar -zxf PDFlib-Lite-7-*
# cd PDFlib-Lite-7.0*
3) Now, configure PDFlib-Lite
# ./configure --prefix=/usr/local
4) Create the installation files and install PDFlib-Lite
# make
# make install
5) Once PDFlib-Lite is installed, download ‘PDFlib’ using pecl.
# pecl download pdflib
6) Unpack the package and go to the pdflib directory
# tar xvzf pdflib-*.tgz
# cd pdflib-*
7) Create configuration files
# phpize
8) Now, configure PDFlib
# ./configure
9) Create the installation files and install PDFlib
# make
# make install
A pdf.so file will be created in PHP's extension directory, which you can locate using the following command. If the file is not created there, copy it from its current location to the extension directory.
# php -i | grep extension_dir
The final step is to add the PDFLib extension in the php.ini file and restart the Web Server.
extension="pdf.so"
Note: Using the above instructions, PDFlib can be installed on cPanel and Plesk servers as well. Though the locations of extension_dir and php.ini differ, they can easily be located using the commands above.

How to install and configure APC Cache on a RedHat/CentOS server?


APC is a PHP caching solution; the name stands for Alternative PHP Cache. It serves PHP pages from a cache, thus helping to reduce server load. APC is open source software and is updated from time to time.
The following are the steps to install APC cache on a CentOS server, or even on cPanel and Plesk servers. All these commands are executed via SSH as the 'root' user.
1) First, go to a temporary directory
# cd /usr/local/src
2) Download the latest APC version
# wget http://pecl.php.net/get/APC-3.1.6.tgz
3) Unpack the archive and change to the APC directory
# tar -zxf APC-3.1.6.tgz
# cd APC-*
4) Create configuration files
# phpize
5) Locate the "php-config" file, since you need to configure APC with it
# which php-config
6) Now, configure APC
# ./configure --enable-apc --enable-apc-mmap --with-apxs \
--with-php-config=/usr/local/bin/php-config
7) Create the installation files and install APC
# make
# make install
Now that the APC cache is installed on your server, the final step is to activate the APC extension in the php.ini file. Locate the working php.ini file and edit it
# php -i | grep php.ini
Search for “extension_dir” and place the following code under it:
extension="apc.so"
apc.enabled=1
apc.shm_segments=1
apc.shm_size=256
apc.ttl=3600
apc.user_ttl=7200
apc.num_files_hint=1024
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
Save the file and restart the Apache web server. You can tweak the APC settings by editing the values above, but make sure you restart the Apache service for the changes to take effect.
That’s it. To check if APC is activated, execute
# php -i | grep apc
Note: once APC is installed, make sure the 'apc.so' file was created under the directory specified by "extension_dir" in php.ini. If not, find the exact path of the file and create a symlink to it there.

Starting sshd: Missing privilege separation directory: /var/empty/sshd


The SSHD service, while restarting, looks for the "/var/empty/sshd/etc" directory, which contains a symlink to the 'localtime' file; if the directory is not found, the restart fails with the "cannot create symbolic link `/var/empty/sshd/etc': No such file or directory" error message.
The complete error message looks as follows:
-bash-3.2# service sshd restart
cp: cannot create symbolic link `/var/empty/sshd/etc': No such file or directory
Starting sshd: Missing privilege separation directory: /var/empty/sshd
[FAILED]
The solution is to create the “/var/empty/sshd/etc” directory and then create a symlink for localtime file. SSH to your server and execute:
# mkdir /var/empty/sshd/etc
# cd /var/empty/sshd/etc
# ln -s /etc/localtime localtime
Once done, you should be able to restart the sshd service.

How to defragment or optimize a database in Mysql?


If you remove a lot of data from tables or change the database structure, de-fragmenting/optimizing the database is necessary to avoid performance loss, especially while running queries.
SSH to your server and execute:
mysqlcheck -o <databasename>
where -o stands for optimize, which is similar to defragmentation. You should look to defragment the tables regularly when using VARCHAR fields, since these columns get fragmented often.
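Because optimization locks tables while it runs, it is usually scheduled during off-peak hours. One possible crontab entry (the weekly timing and the --all-databases choice are assumptions, not part of the original tip; supply credentials via ~/.my.cnf or -u/-p as appropriate):

```
# Every Sunday at 3:00 AM, optimize all databases
0 3 * * 0 /usr/bin/mysqlcheck -o --all-databases
```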

How to secure the /tmp partition on a VPS with noexec,nosuid option?


How to secure the /tmp and /var/tmp partition on a VPS?
On a VPS, there are two ways to mount or secure the /tmp and /var/tmp partitions with the noexec,nosuid options. One way is to mount these partitions from the Node the VPS resides on.
1) Login to the Node server and execute the following command:
# vzctl set VEID --bindmount_add /tmp,noexec,nosuid,nodev --save
# vzctl set VEID --bindmount_add /var/tmp,noexec,nosuid,nodev --save
The "bindmount_add" option is used to mount the partition inside the VPS. 'VEID' is the ID of the VPS you are working on.
2) The second option is to mount these partitions from within the VPS itself. This is useful in case you don't have access to the Node server. To mount /tmp and /var/tmp from within the VPS, execute:
# mount -t tmpfs -o noexec,nosuid,nodev tmpfs /tmp
# mount -t tmpfs -o noexec,nosuid,nodev tmpfs /var/tmp
To check the mounted ‘tmp’ partitions, execute
root@server [~]# mount | grep tmp
tmpfs on /tmp type tmpfs (rw,noexec,nosuid)
tmpfs on /var/tmp type tmpfs (rw,noexec,nosuid,nodev)
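The two mount commands above do not survive a reboot. Where you control /etc/fstab (a dedicated server, or a VPS type that honors fstab), entries along these lines make the tmpfs mounts persistent; note that tmpfs lives in RAM, so the contents of /tmp and /var/tmp are lost on every reboot:

```
tmpfs  /tmp      tmpfs  noexec,nosuid,nodev  0 0
tmpfs  /var/tmp  tmpfs  noexec,nosuid,nodev  0 0
```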

URL Redirection: How to set Frame Forwarding for a domain?


What is Frame Forwarding and How it is set?
A few lines of Explanation:
Frame forwarding (redirection) of a domain is a little different from normal forwarding. In frame forwarding, website visitors are redirected to another site, but the destination address is not shown, so they do not notice the redirection. With "normal forwarding" (also called a parked domain), visitors are redirected to another site and the destination address is shown, so they know about the redirection.
For example, suppose the main website is abc.com and we frame forwarded xyz.com to it. When we access xyz.com, the URL in the address bar of the browser will stay as it is and the contents will be fetched from abc.com. The user won’t notice the redirection.
Solution:
Using the above domain names as example, in order to set Frame Forwarding for xyz.com, first add the domain on the server as we normally do.
Create a index.html file and add the following code
<frameset rows="100%,*" frameborder="no" framespacing="0" border="0">
<frame src="http://www.abc.com/"></frame>
</frameset>
This is it.
A Drawback of the above method and a Solution for it:
Now, one thing to note: since you are setting up the redirection using an index.html file, any file/directory accessed via a direct URL (for example: xyz.com/anyfilename) will result in a "404 Not Found" error. This is because the request bypasses the redirection set in the index.html file and looks for the file under xyz.com itself.
To overcome this problem, add xyz.com as a "ServerAlias" for abc.com. This is achieved in the VirtualHost entry of the main domain. Edit the Apache configuration
vi /etc/httpd/conf/httpd.conf
Search for the VirtualHost entry of abc.com and make sure the "ServerAlias" line looks like the following
ServerAlias www.abc.com xyz.com www.xyz.com
Save the file and restart the Web server
service httpd restart
Now, directly accessing a file or directory of a target domain using the alias domain name will also work.
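Put together, the relevant VirtualHost entry for abc.com would look something like the sketch below (the port, DocumentRoot and other values are placeholders for whatever your httpd.conf already contains):

```
<VirtualHost *:80>
    ServerName abc.com
    ServerAlias www.abc.com xyz.com www.xyz.com
    DocumentRoot /var/www/html/abc.com
</VirtualHost>
```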

How to check System Information and vendors of MotherBoard/Processor/RAM in Linux?


To check the system information and manufacturers of the motherboard, processor, RAM and other hardware from the command line on a Linux machine, install the "dmidecode" package. You can search for and install the dmidecode package using yum.
Check your Linux server architecture i.e. 32bit OR 64bit:
# uname -p
Search for the dmidecode package
# yum search dmidecode
Depending on the architecture, install the proper dmidecode package
# yum install dmidecode
You are done. To check all the hardware information of the server, execute
# dmidecode

Shell script to backup a Mysql database and save it on a remote server using Ftp


The following shell script dumps a mysql database and saves the .sql file to a remote location using Ftp. The backup file name includes the current date, so you can keep multiple dated backups of the same database under one directory.
Create a file called mysqlbkup.sh
# vi /root/mysqlbkup.sh
and paste the following code in the file as it is.
##############START OF THE SCRIPT##############
#!/bin/bash
# Specify the temporary backup directory
BKUPDIR="/tmp"
# Database Name
dbname="dbname_here"
# store the current date
date=`date '+%Y-%m-%d'`
# Specify Ftp details
ftpserver="FtpServerIP"
ftpuser="username"
ftppass="password"
# Dump the mysql database with the current date and compress it.
#Save the mysql password in a file and specify the path below
/usr/bin/mysqldump -uroot -p`cat /path/to/passfile` $dbname | gzip > $BKUPDIR/$date.$dbname.sql.gz
# Change directory to the backup directory
cd $BKUPDIR
# Upload the backup
ftp -n $ftpserver <<!EOF!
user $ftpuser $ftppass
binary
prompt
mput *.sql.gz
quit
!EOF!
# Remove the local backup file
/bin/rm -f $BKUPDIR/$date.$dbname.sql.gz
##############END OF THE SCRIPT##############
Save the file and schedule a cronjob to execute it on a daily basis, say during night hours at 1:00 AM. Edit the cron file
# crontab -e
and set the following cronjob
0  1  *  *  *  /bin/sh /root/mysqlbkup.sh
save the file and restart the crond service
# service crond restart
The script works on a Linux/Plesk server as well; just replace the mysqldump line in the script with the following
/usr/bin/mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` $dbname | gzip > $BKUPDIR/$date.$dbname.sql.gz
Make sure you assign the db_name, ftpserver/user/pass values properly at the start of the script.
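To see how the dated backup name in the script is built, here is that naming logic in isolation (the database name is a placeholder):

```shell
# Build the date-stamped backup file name exactly as the script does
dbname="mydb"              # placeholder database name
date=`date '+%Y-%m-%d'`    # e.g. 2011-08-25
backup_file="$date.$dbname.sql.gz"
echo "$backup_file"
```

Because the date sorts lexicographically, a plain directory listing on the Ftp server shows the backups in chronological order.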
Note: Leave a comment if you have any suggestions, questions OR have received any error message using this script.

Qmail-inject: fatal: mail server permanently rejected message


You see the "qmail-inject: fatal: mail server permanently rejected message" error while sending emails from a Plesk server, and error messages such as the following in the mail logs:


qmail-queue-handlers[xxxx]: Unable to change group ID: Operation not permitted
qmail-queue[xxxx]: files: write buf 0xbff4dfe0[156] to fd (5) error - (32) Broken pipe
qmail-queue[xxxx]: files: cannot write chuck from 4 to 5 - (32) Broken pipe



It is due to incorrect permission/ownership of the 'qmail-queue' file under the "/var/qmail/bin" directory. Make sure
the ownership is 'mhandlers-user:popuser'
the permission is 2511.

Check the current ownership/permission:
# ls -la /var/qmail/bin/qmail-queue
It should be as follows:
-r-x--s--x  1 mhandlers-user popuser 67804 May  4 08:41 /var/qmail/bin/qmail-queue
If not, correct the ownership
# chown mhandlers-user.popuser /var/qmail/bin/qmail-queue
set the proper permissions,
# chmod 2511 /var/qmail/bin/qmail-queue
Restart Qmail once and see if the email works.
Note: If email still doesn't work, please comment on this post with the error message and the output of the following command, and I will find a solution for you:
ls -la /var/qmail/bin/qmail-queue*
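The leading 2 in mode 2511 is the setgid bit (the s in -r-x--s--x), which makes qmail-queue run with the file's group (popuser) instead of the caller's group. As a sketch, the octal mode of any file can be read back with GNU stat, shown here on a scratch file:

```shell
# Demonstrate mode 2511 (setgid + r-x--s--x) on a scratch file
f=`mktemp`
chmod 2511 "$f"
stat -c '%a %A' "$f"    # prints something like: 2511 -r-x--s--x
```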

Fix Error: open /dev/mptctl: No such device. Make sure mptctl is loaded into the kernel


The "mptctl" kernel module is required to check the RAID status on a Linux server using the "mpt-status" tool. You may receive the "/dev/mptctl: No such device" or "Make sure mptctl is loaded into the kernel" message while checking the RAID status, which indicates that the module is not loaded into the kernel or that the "/dev/mptctl" device has not been created.
[root@server ~]# mpt-status
open /dev/mptctl: No such device
Are you sure your controller is supported by mptlinux?
Make sure mptctl is loaded into the kernel
To create the mptctl device, execute:
# mknod /dev/mptctl c 10 22
and verify it by
# ls -la /dev/mptctl
crw------- 1 root root 10, 22 Apr 12 08:05 /dev/mptctl
Once the device is created, load the module in the kernel using modprobe
# modprobe mptctl
and verify it using lsmod, which should list the following modules along with their details
# lsmod |grep mptctl
mptctl
mptbase
scsi_mod
Add the line below to the /etc/rc.modules file to load the module on every reboot (make sure the file is executable: chmod +x /etc/rc.modules):
modprobe mptctl
Save the file. This will make sure the module is loaded into the kernel on every server reboot.
Once done, you will be able to check the RAID status by executing the ‘mpt-status’ command and it should show something like the following:
[root@server ~]# mpt-status
ioc0 vol_id 0 type IM, 2 phy, 465 GB, state OPTIMAL, flags ENABLED
ioc0 phy 1 scsi_id 1 ATA ST3500418AS CC38, 465 GB, state ONLINE, flags NONE
ioc0 phy 0 scsi_id 4 ATA ST3500418AS CC38, 465 GB, state ONLINE, flags NONE
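For unattended monitoring you can parse that report for the volume state. A rough sketch (the function name is mine; it assumes the `state XXX,` field layout shown in the sample output above):

```shell
# Sketch: read an mpt-status report on stdin, print the volume state,
# and exit non-zero unless the volume is OPTIMAL.
check_raid_state() {
    awk '/vol_id/ {
            for (i = 1; i <= NF; i++)
                if ($i == "state") { st = $(i + 1); sub(/,$/, "", st) }
            print "volume state: " st
            exit (st == "OPTIMAL" ? 0 : 1)
         }'
}

# Example: mpt-status | check_raid_state || mail -s "RAID degraded" you@example.com
```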

Invalid command “SSLEngine”, perhaps misspelled OR defined by a module not included.


We install an SSL certificate on a domain to secure the transactions carried out on the website, but sometimes the error message
"Invalid command "SSLEngine", perhaps misspelled or defined by a module not included in the server configuration"
appears while browsing the website. It indicates that the mod_ssl module, required to run the SSL engine on a CentOS server, is missing and needs to be installed.
Install the mod_ssl module using yum
#yum install mod_ssl
Once it is installed, make sure to restart the Apache service
#service httpd restart
You should now be able to browse the website using https.

Howto: Check Memory/RAM usage in Linux


How do you check Memory (RAM) usage in Linux, and what are the different ways to do it?
Memory, widely known as RAM, is one of the important components of a server and makes sure the tasks performed on your server are processed fast enough. The higher the availability of physical memory, the more stable your server is during high resource usage.
Linux offers various tools to check Memory/RAM usage on your server, such as free, top, sar and vmstat, using which you can decide whether to optimize software to use less memory or whether it is time to upgrade the memory on the server.
1) ‘free’ command: one of the easiest ways to check the RAM usage:
free -m
will display physical memory as well as Swap


free -m -t
same as above, but it also displays the total of physical plus swap memory at the bottom.


2) ‘top’ command: The top command displays real-time values for the running system, continuously updated (every 3 seconds by default). The two rows "Mem" and "Swap" display the total, used and free amounts of RAM/Swap. Though the values are displayed in kB and are not very human readable, it is just one more way to check the usage.

3) ‘sar’ command: included in the ‘sysstat’ package, which is not installed by default. To install the ‘sysstat’ package, execute:
yum install sysstat
Once ‘sysstat’ package is installed, start the service
service sysstat start
Once installed, the sysstat package provides the ‘sar’ command, which collects system activity information and saves it in a file before displaying it on standard output.
sar -r
displays Memory/Buffer/Swap information horizontally.


4) The /proc/meminfo file: displays everything about the RAM on your server.
cat /proc/meminfo
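As a quick illustration of reading /proc/meminfo programmatically, here is a minimal sketch (the function name is mine) that derives used memory from the MemTotal and MemFree fields:

```shell
# Sketch: summarize RAM usage straight from /proc/meminfo (values are in kB).
mem_summary() {
    awk '/^MemTotal:/ { t = $2 }
         /^MemFree:/  { f = $2 }
         END { printf "total=%d kB free=%d kB used=%d kB\n", t, f, t - f }' /proc/meminfo
}
mem_summary
```

Note that "used" computed this way includes buffers and cache, which the kernel releases under memory pressure — free -m shows the same breakdown in its buffers/cache line.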

Starting sshd: Privilege separation user does not exist


The error message "Starting sshd: Privilege separation user sshd does not exist FAILED" is received on restarting the SSHD service. It indicates that the user ‘sshd’ does not exist on the system. To fix the issue, you need to add the ‘sshd’ user on the server.
Edit the file /etc/passwd and add the below line:
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
and the below line in the /etc/group file
sshd:x:74:
You will now be able to restart the sshd service.
# /etc/init.d/sshd restart
Stopping sshd: [ OK ]
Starting sshd: [ OK ]
Another solution is to disable UsePrivilegeSeparation. Edit the sshd configuration file at /etc/ssh/sshd_config and change
UsePrivilegeSeparation yes
to
UsePrivilegeSeparation no
It is less secure but just another option.


How to secure the SSHD service?


The SSH service can be secured in various ways: changing the SSH port, restricting the SSH protocol version, setting the ListenAddress, disabling root login with the PermitRootLogin parameter, allowing SSH access only to specific users, restricting SSH access to specific IPs, etc. These steps will make sure the SSH service on your server is secure.
Edit the SSHD configuration and make the changes listed below:
vi /etc/ssh/sshd_config
1) Change the default SSH port 22 to a non-standard value by changing the ‘Port’ directive:
Port 2233
2) To make SSH use only the more secure protocol version 2, set the ‘Protocol’ directive as
Protocol 2
3) Bind the SSHD service to a specific IP of the server by replacing the ‘#ListenAddress’ directive with
ListenAddress xx.xx.xx.xx
where xx.xx.xx.xx is an additional IP of the server and the only one that will accept SSH connections.
4) To disable root access, set ‘PermitRootLogin’ directive to ‘no’
PermitRootLogin no
Make sure you add an alternate SSH user who has privileges to gain root access before disabling this option.
5) To allow SSH access to specific users, add the “AllowUsers” directive at the end of the configuration
AllowUsers user1 user2
This will allow SSH access only to the users user1 and user2. If root login is disabled, make sure the user who is allowed to gain root access is included in this list.
Save the file and restart the sshd service
service sshd restart
6) Using TCP wrappers, i.e. hosts.allow and hosts.deny, you can restrict SSH access to specific IPs. Edit /etc/hosts.allow and add the following:
sshd : yourlocalip: allow
sshd : all : deny
"yourlocalip" is the IP assigned to you by your ISP. This will restrict SSH access to your local IP only.
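For reference, the sshd_config directives from steps 1–5 collected into a single fragment (the port, IP and user names are the example values used above — substitute your own):

```
# /etc/ssh/sshd_config (example values from the steps above)
Port 2233
Protocol 2
ListenAddress xx.xx.xx.xx
PermitRootLogin no
AllowUsers user1 user2
```

Before restarting, run `sshd -t`; it prints nothing when the configuration parses cleanly, so a typo will not lock you out of the server.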

Mysql: Access denied for user 'root'@'localhost'


You may receive the "Access denied for user 'root'@'localhost'" message while accessing MySQL from the command prompt. It means the MySQL password for user ‘root’ is incorrect, and you need to reset the password using the skip-grant-tables option.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
How to reset a Mysql password for ‘root’?
# /etc/init.d/mysql stop
Make sure all the mysql processes are stopped by executing the killall command
# killall -9 mysqld
Next, connect to mysql server using the skip-grant-tables method.
# /usr/bin/mysqld_safe --skip-grant-tables &
now, execute ‘mysql’ and you will be at the mysql prompt
# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 23056
Server version: xx.xx-community MySQL Community Edition (GPL)
Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

mysql>
Go to the ‘mysql’ database and update the password for user ‘root’ in the "user" table.
mysql> use mysql;
To set a password, execute
mysql> update user set password=PASSWORD("passhere") where user='root';
or, to set a blank password, execute
mysql> update user set password=PASSWORD("") where user='root';
Once done, reload privileges and quit
mysql> flush privileges;
mysql> quit
Now, restart the mysql service
# /etc/init.d/mysql restart
and you should be able to connect to the mysql server:
# mysql
OR
# mysql -uroot -p

How to install SuPHP/phpSuExec on Plesk?


How to install SuPHP/phpSuExec on a Plesk server?
SuPHP (also known as phpSuExec) is a module that increases the security of the server by executing PHP files under the ownership of the file's owner instead of the Apache user (i.e. "apache").
The advantages of having suPHP are:
1. Files and directories that need 777 permissions to be written to via the browser will now need at most 755 permissions. Files/directories with 777 permissions will result in an "Internal Server Error".
2. If you need to change the value of a PHP directive for a domain, for example register_globals, it has to be placed in the domain's php.ini instead of the .htaccess file, as the latter will result in an "Internal Server Error".
3. All files and directories uploaded using a script will be owned by the user instead of the user ‘apache’ (the Apache user).
4. A user can edit/remove files via FTP that were uploaded through the browser.
In order to install SuPHP on the server, download and install the atomic script
# wget -q -O - http://www.atomicorp.com/installers/atomic | sh
Once the script is installed, install SuPHP module using yum
# yum install mod_suphp
The next step is to load the SuPHP module into Apache. The suphp installation automatically creates a "suphp.conf" file under the Apache configuration directory; if not, create it.
# vi /etc/httpd/conf.d/suphp.conf
and insert the following lines:
#Load the Mod_SuPHP module
LoadModule suphp_module modules/mod_suphp.so
php_admin_value engine off
# Enable handlers
suPHP_AddHandler x-httpd-php
AddHandler x-httpd-php .php .php3 .php4 .php5
#Enable the SuPHP engine
suPHP_Engine on
Apache reads all configuration files from the /etc/httpd/conf.d directory by default, so there is no need to include the module in the httpd.conf file separately.
Now, a suphp.conf configuration file should be present under /etc (if not, create it):
vi /etc/suphp.conf
copy/paste the following contents as it is:
[global]
logfile=/var/log/suphp.log
loglevel=info
webserver_user=apache
docroot=/var/www/vhosts
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false
check_vhost_docroot=false
errors_to_browser=false
env_path=/bin:/usr/bin
umask=0022
min_uid=500
min_gid=500

[handlers]
x-httpd-php="php:/usr/bin/php-cgi"
x-suphp-cgi="execute:!self"
Make sure the "handle_userdir" directive is commented out or removed from the file, since it is deprecated in the latest version.
At the end, we have to restart the httpd service for all these changes to take effect
# service httpd restart
Test the SuPHP installation: create a phpinfo.php file with 777 permissions; it should show an "Internal Server Error" when browsed.


How to enable ‘General Query Log’ in Mysql?


The General Query Log keeps track of the MySQL server's activity, i.e. it records when a client connects or disconnects and logs each query as it is executed. It is useful when the number of people managing the database is high. In order to enable the ‘General Query Log’,
edit the Mysql configuration file
vi /etc/my.cnf
enable the log under the ‘mysqld’ section
log=/var/log/mysql.general.log
Save the file. Now create the log file and set the mysql ownership
touch /var/log/mysql.general.log
chown mysql.mysql /var/log/mysql.general.log
Now, restart the mysql service
/etc/init.d/mysql restart
You can now execute queries using phpMyAdmin or any third-party SQL client and watch the log:
tail -f /var/log/mysql.general.log
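On MySQL 5.1 and later the general log can also be toggled at runtime, without editing my.cnf or restarting the service — a sketch (requires the SUPER privilege):

```
mysql> SET GLOBAL general_log_file = '/var/log/mysql.general.log';
mysql> SET GLOBAL general_log = 'ON';
```

Remember to set it back to 'OFF' once you are done debugging; the general log grows very quickly on a busy server.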

Yum update. Error: rpmdb open failed


The "rpmdb open failed" error message is mostly received when the rpm databases (the __db.00* files under the /var/lib/rpm directory) are corrupted. This results in an "error: cannot open Packages database" message while installing or updating a package via yum.


root@server [~]# yum update
Loaded plugins: fastestmirror
error: no dbpath has been set
error: cannot open Packages database in /%{_dbpath}
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in ?
yummain.user_main(sys.argv[1:], exit_code=True)
File "/usr/share/yum-cli/yummain.py", line 309, in user_main
errcode = main(args)
File "/usr/share/yum-cli/yummain.py", line 157, in main
base.getOptionsConfig(args)
File "/usr/share/yum-cli/cli.py", line 187, in getOptionsConfig
self.conf
File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 664, in <lambda>
conf = property(fget=lambda self: self._getConfig(),
File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 239, in _getConfig
self._conf = config.readMainConfig(startupconf)
File "/usr/lib/python2.4/site-packages/yum/config.py", line 804, in readMainConfig
yumvars['releasever'] = _getsysver(startupconf.installroot, startupconf.distroverpkg)
File "/usr/lib/python2.4/site-packages/yum/config.py", line 877, in _getsysver
idx = ts.dbMatch('provides', distroverpkg)
TypeError: rpmdb open failed

The common fix is to delete the rpm databases and run rebuilddb, like
# yum clean all
# rm -f /var/lib/rpm/__db*
# rpm --rebuilddb
# yum update
However, on a VPS, yum may still not work even after rebuilding the rpm database, and you may have to recreate the /dev/urandom device. Log in to your VPS and execute:
# rm /dev/urandom
# mknod -m 644 /dev/urandom c 1 9
The problem may reoccur on a VPS reboot, so to fix it permanently, log in to the Hardware Node and execute:
# vzctl stop VEID
# mknod --mode=644 /vz/private/VEID/fs/root/dev/urandom c 1 9
# vzctl start VEID

How to redirect a website using .htaccess?


How to redirect a website using .htaccess?
Redirect website http://mydomain.com to http://www.mynewdomain.com
RewriteEngine on
RewriteCond %{HTTP_HOST} ^mydomain\.com$
RewriteRule ^(.*)$ http://www.mynewdomain.com/$1 [R=301,L]
Redirect website mydomain.com with and without www requests to http://www.mynewdomain.com
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$ [OR]
RewriteCond %{HTTP_HOST} ^mydomain\.com$
RewriteRule ^(.*)$ http://www.mynewdomain.com/$1 [R=301,L]
Redirect requests from http://mydomain.com to http://mydomain.com/subdirectory i.e. redirecting requests from main domain to a sub-directory.
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www\.mydomain\.com$ [OR]
RewriteCond %{HTTP_HOST} ^mydomain\.com$
RewriteCond %{REQUEST_URI} !^/subdirectory/
RewriteRule ^(.*)$ http://www.mydomain.com/subdirectory/$1 [R=301,L]
Redirect all http (80) requests of a domain to https (443) i.e. redirecting requests from non-secure port to a secure port.
RewriteEngine On
RewriteCond %{SERVER_PORT}      !443
RewriteRule ^(.*)$ https://mydomain.com/$1 [R,L]

Script to email failed Ftp login attempts for FTP server


Shell Script to search Failed Ftp Login Attempts
This shell script searches the server logs on a daily basis and emails you the failed FTP login attempts of the day. The FTP logs are saved in the /var/log/messages file, as by default there is no separate log file for FTP in Linux.
Create a file /home/script/failedftp.sh and paste the below code:
#!/bin/bash
#Retrieve the current date
CUR_DATE=`date +"%b %e"`
#Create a temporary file to store the logs
touch /tmp/out.txt

echo "List Follows" > /tmp/out.txt
#Search the failed attempts and save them in the temporary file
/bin/grep "$CUR_DATE" /var/log/messages | grep pure-ftpd | grep failed >> /tmp/out.txt
#Email the contents of the file to your email address
/bin/mail -s "Failed Ftp Login Attempts on '$CUR_DATE'" youremail@yourdomain.com < /tmp/out.txt
Save the file. Now schedule a cron job to execute the script once a day. Edit the cron file
crontab -e
and add the following cron job
59 23 * * * /bin/sh /home/script/failedftp.sh
Note:
1) This script works with the Pure-FTPd server. You will have to adjust the search string according to your FTP server.
2) If you copy/paste the script, the single and double quotes may get mangled, so make sure they are plain ASCII quotes before executing the script.

Howto: Disable Directory Listing on WebServer


How to Disable Directory Listing? You may want to hide directory listings: by default, web servers look for an index file in every directory and, if none is found, they list the directory's files and subdirectories when it is browsed.
To disable Directory Listing for an account recursively:
1) Create a .htaccess file under the directory
vi .htaccess
2) Add Options directive as follows:
Options -Indexes
3) Save the file.
You will now see a Forbidden message on accessing any directory that does not include an index file.
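If you have access to the server configuration, the same option can be set server-wide instead of per-directory — a sketch for Apache's httpd.conf (the path is an example):

```
<Directory "/var/www/html">
    Options -Indexes
</Directory>
```

Keep in mind that the .htaccess approach only works where AllowOverride includes Options (or All) for that directory.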

Error: Unable to create the domain because a DNS record exists


Error message: "Unable to create the domain example.com because a DNS record pointing to the host example.com already exists."
This error is displayed when adding a domain from the Plesk control panel fails because the DNS records of the domain already exist in the psa database. The tables dns_recs and dns_zone hold the DNS records for a domain.
In order to add the domain example.com, you will have to remove the DNS entries from the tables dns_recs and dns_zone.
1) Go to the MySQL prompt:
root@host [~]# mysql -uadmin -p`cat /etc/psa/.psa.shadow`
2) Use the psa database
mysql>  use psa;
3) Find the dns_zone_id of the domain, then remove the DNS entries from the dns_recs and dns_zone tables:
mysql> select id from dns_zone where name='example.com';
mysql> delete from dns_recs where dns_zone_id=10;
mysql> delete from dns_zone where id=10;
where 10 is the id returned by the select query for the domain example.com.
4) Restart the mysql service:
root@host [~]# service mysqld restart
You should now be able to add the domain from Plesk control panel successfully.