
Part II: Configuration

Chapter List

Chapter 4: Synchronizing Servers with rsync and SSH
Chapter 5: Cloning Systems with Systemimager
Chapter 6: Heartbeat Introduction and Theory
Chapter 7: A Sample Heartbeat Configuration
Chapter 8: Heartbeat Resources and Maintenance
Chapter 9: Stonith and Ipfail

Chapter 4: Synchronizing Servers with rsync and SSH

This chapter describes one method of automating the copying of data and configuration files from one server to another. In its simplest form, synchronizing the data on two (or more) servers is just a matter of copying files from one server to another. One server acts as a primary repository for data, and changes to the data can only be made on this server (in a high-availability configuration, only one server owns a resource at any given point in time). A regularly scheduled copy utility then sends the data after it has been changed on the primary server to the backup server so it is ready to take ownership of the resource if the primary server crashes.

In a cluster configuration all nodes need to access and modify shared data (all cluster nodes offer the same services), so you will probably not use this method of data synchronization on the nodes inside the cluster. You can, however, use the method of data synchronization described in this chapter on highly available server pairs to copy data and configuration files that change infrequently.


The open source software package rsync that ships with the Red Hat distribution allows you to copy data from one server to another over a normal network connection. rsync is more than a replacement for a simple file copy utility, however, because it adds the following features:

  • Examines the source files and only copies blocks of data that change.[1]

  • (Optionally) works with the secure shell to encrypt data before it passes over the network.

  • Allows you to compress the data before sending it over the network.

  • Will automatically remove files on the destination system when they are removed from the source system.

  • Allows you to throttle the data transfer speed for WAN connections.

  • Has the ability to copy device files (which is one of the capabilities that enables system cloning as described in the next chapter).

Online documentation for the rsync project is available on the project's home page (and on your system by typing man rsync).


rsync must (as of version 2.4.6) run on a system with enough memory to hold about 100 bytes of data per file being transferred.
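To put that figure in perspective, a quick back-of-the-envelope estimate (the file count here is an assumption chosen for illustration):

```shell
# At roughly 100 bytes of rsync state per file, one million files
# needs on the order of 95 MB of memory.
FILES=1000000
echo "$(( FILES * 100 / 1024 / 1024 )) MB"   # → 95 MB
```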

rsync can either push or pull files from the other computer (both must have the rsync software installed on them). In this chapter, we will push files from one server to another in order to synchronize their data. In Chapter 5, however, rsync is used to pull data off of a server.


The Unison software package can synchronize files on two hosts even when changes are made on both hosts. See the Unison project, and also see the Csync2 project for synchronizing multiple servers.

Because we want to automate the process of sending data (using a regularly scheduled cron job) we also need a secure method of pushing files to the backup server without typing in a password each time we run rsync. Fortunately, we can accomplish this using the features of the Secure Shell (SSH) program.
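The scheduling side is an ordinary cron job. A hypothetical crontab entry might look like the following (the host name, user account, and paths are placeholders for illustration, not values from this recipe):

```shell
# m    h dom mon dow  command
# Push /www to the backup server every 15 minutes over an encrypted channel.
*/15 * * * *  rsync -a --delete -z -e ssh /www/ web@backupserver:/www/
```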

[1]Actually, rsync does a better job of reducing the time and network load required to synchronize data because it creates signatures for each block and only passes these block signatures back and forth to decide which data needs to be copied.

Open SSH 2 and rsync

The rsync configuration described in this chapter will send all data through an Open Secure Shell 2 (henceforth referred to as SSH) connection to the backup server. This will allow us to encrypt all data before it is sent over the network and use a security encryption key to authenticate the remote request (instead of relying on the plaintext passwords in the /etc/passwd file).

A diagram of this configuration is shown in Figure 4-1.

Image from book
Figure 4-1: rsync and SSH

The arrows in Figure 4-1 depict the path the data will take as it moves from the disk drive on the primary server to the disk drive on the backup server. When rsync runs on the primary server and sends a list of files that are candidates for synchronization, it also exchanges data block signatures so the backup server can figure out exactly which data changed, and therefore which data should be sent over the network.

SSH: A Secure Data Transport

Because we will push data from the primary server to the backup server, we will configure the primary server as an SSH client and then turn the backup server into an SSH server capable of accepting this data. The SSH connection between these two servers will need to enforce the following security configuration:

  • When the primary server initiates a connection to the backup server, it should verify that it is in fact talking to the true backup server (and not a bogus host created by a hacker trying to hijack our data).

  • The backup server should only accept connections from the primary server. In fact, the backup server should only accept connections from one trusted account on the primary server.

  • All data sent over the network connection should be encrypted.

Using these security features, our data replication method can also be used on an untrusted public network such as the Internet.

This security configuration will also need to be fully automated (we should never need to walk over to the system console and enter a password to get the SSH connection working), and it should survive system reboots.

We can accomplish all of these security configuration goals using SSH encryption keys that allow us to establish a two-way trust relationship between the client and the server.

SSH Encryption Keys

Encryption keys, or asymmetric encryption keys (to be exact), always have two halves: a public half and a private half. The private half should be zealously guarded and never transmitted over the network. The public half should be given to anyone who wants, or needs, to send encrypted data. For SSH to work properly, we need to configure two types of encryption keys: the SSH host encryption key and the user encryption key.

The Host Encryption Key

The host encryption key is configured once for each SSH server and stored in a file on a locally attached disk drive. Red Hat stores the private and public halves of this key in separate files:[2]


These key files are created the first time the /etc/rc.d/init.d/sshd script is run on your Red Hat system. However, we really only need to create this key on the backup server, because the backup server is the SSH server.

The User Encryption Key

This key is created on the primary server manually by running the ssh-keygen utility. This key is stored in two files in the user's home directory in the .ssh subdirectory. For user web with a home directory of /home/web, the key would be stored in the following two files:


The public half of this key is then copied to the backup server (the SSH server) so the SSH server will know this user can be trusted.

Session Keys

As soon as the client decides to trust the server, they establish yet another encryption key: the session key. As its name implies, this key is valid for only one session between the client and the server, and it is used to encrypt all network traffic that passes between the two machines. The SSH client and the SSH server create a session key based upon a secret they both share, called the shared secret. You should never need to do anything to administer the session key, because it is generated automatically by the running SSH client and SSH server (information used to generate the session key is stored on disk in the /etc/ssh/primes file).

Establishing a two-way trust relationship between the SSH client and the SSH server is a two-step process. In the first step, the client figures out if the server is trustworthy and what session key it should use. Once this is done, the two machines establish a secure SSH transport.

The second step occurs when the server decides it should trust the client.

Establishing the Two-Way Trust Relationship: Part 1

Let's examine how this two-way trust relationship is established in more detail. First, the client must decide whether it can trust the server; then, the server must decide whether it can trust the client. So Part 1 of establishing the two-way trust relationship is when the client computer determines whether it is talking to the proper server.

Should the SSH Client Trust the SSH Server?

The first step in building a secure SSH transport is initiated by the client. The client needs to know if it is talking to the right server (no one has hijacked the IP address or hostname of our SSH server). This is accomplished during the SSH key exchange processes.


If you need to reuse an IP address on a different machine and retain the security configuration you have established for SSH, you can do so by copying the SSH key files from the old machine to the new machine.

The SSH Key Exchange Process

The SSH key exchange process starts when an SSH client sends a connection request on port 22 to the SSH server (as shown in Figure 4-2).

Image from book
Figure 4-2: SSH client connection request

Once the server and the client make some agreements about the SSH protocol, or version, they will use to talk to each other (SSH version 1 or SSH version 2), they begin a process of synchronized mathematical gymnastics that only requires a minimum amount of data to pass over the network. The goal of these mathematical gymnastics is to send only enough information for the client and server to agree upon a shared secret for this SSH session without sending enough information over the network for an attacker to guess the shared secret. Currently, this is called the diffie-hellman-group1-sha1 key exchange process.

The last thing the SSH server sends back to the client at the end of the diffie-hellman-group1-sha1 key exchange process is the final piece of data the client needs to figure out the shared secret along with the public half of its host encryption key (as shown in Figure 4-3).

Image from book
Figure 4-3: SSH server response

The SSH server never sends the entire SSH session key or even the entire shared secret over the network, making it nearly impossible for a network-snooping hacker to guess the session key.

At this point, communication between the SSH client and the SSH servers stops while the SSH client decides whether or not it should trust the SSH server. The SSH client does this by looking up the server's host name in a database of host name-to-host-encryption-key mappings called known_hosts2 (referred to as the known_hosts database in this chapter[3]). See Figure 4-4. If the encryption key and host name are not in the known_hosts database, the client displays a prompt on the screen and asks the user if the server should be trusted.

Image from book
Figure 4-4: The SSH client searches for the SSH server name in its known_hosts database

If the host name of the SSH server is in the known_hosts database, but the public half of the host encryption key does not match the one in the database, the SSH client does not allow the connection to continue and complains.

 Someone could be eavesdropping on you right now (man-in-the-middle attack)!
 It is also possible that the RSA host key has just been changed.
 The fingerprint[4] for the RSA key sent by the remote host is
 6e:36:4d:03:ec:2c:6b:fe:33:c1:e3:09:fc:aa:dc:0e.
 Please contact your system administrator.
 Add correct host key in /home/web/.ssh/known_hosts2 to get rid of this message.
 Offending key in /home/web/.ssh/known_hosts2:1
 RSA host key for has changed and you have requested strict checking.

(If the host key on the SSH server has changed, you will need to remove the offending key from the known_hosts database file with a text editor such as vi.)
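As an aside, OpenSSH releases newer than the versions described in this book can do this edit for you. A hedged sketch (the host name is a placeholder; -f points at the known_hosts file from this recipe):

```shell
# Remove the stale entry for "backupserver" from the known_hosts file.
# ssh-keygen -R rewrites the file in place, saving the original as <file>.old.
ssh-keygen -R backupserver -f /home/web/.ssh/known_hosts2
```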

Once the user decides to trust the SSH server, the SSH connection will be made automatically for all future connections so long as the public half of the SSH server's host encryption key stored in its known_hosts database doesn't change.

The client then performs a final mathematical feat using this public half of the SSH server host encryption key (against the final piece of shared secret key data sent from the server) and confirms that this data could only come from a server that knows the private half of that host encryption key. Once the client has successfully done this, the client and the server are said to have established a secure SSH transport (see Figure 4-5).

Image from book
Figure 4-5: The SSH client and SSH server establish an SSH transport

Now, all information sent back and forth between the client and the server can be encrypted with the session key (the encryption key that was derived on both the client and the server from the shared secret).

Establishing the Two-Way Trust Relationship: Part 2

We've described half of the process for establishing the two-way trust relationship between the client and the server. Now, for Part 2, we need to discuss the other half, which takes place on the SSH server.

Can the Server Trust the Client?

We have solved only half the problem of a two-way trust between the client and the server; the client trusts the server—now the server needs to decide whether it should trust the client.

This can be accomplished using either hostbased authentication or user authentication.

Hostbased Authentication

Historically, hostbased authentication was accomplished using nothing more than the client's source IP address or host name. Crackers could easily break into servers that used this method of trust by renaming their server to match one of the acceptable host names, by spoofing an IP address, or by circumventing the Domain Name System (DNS) used to resolve host names into IP addresses.

System administrators, however, are still tempted to use hostbased authentication because it is easy to configure the server to trust any account (even the root account) on the client machine. Unfortunately, even with the improvements SSH has introduced, hostbased authentication is still cumbersome and weak when compared to the other methods that have evolved, and it should not be used.

User Authentication

When using user authentication, the SSH server does not care which host the SSH connection request comes from, so long as the user making the connection is a trusted user.

SSH can use the user's normal password and account name for authentication over the secure SSH transport. If the user has the same account and password on the client and server, the SSH server knows this user is trustworthy. The main drawback to this method, however, is that we need to enter the user's password directly into our shell script on the SSH client to automate connections to the SSH server. So this method should also not be used. (Recall that we want to automate data synchronization between the primary and backup servers.)

User Authentication using SSH Encryption Keys

Fortunately, a method that is better than either of these two methods is available using SSH encryption keys. All we have to do is store the public half of a user's encryption key on the SSH server. When we try to connect to the SSH server from the SSH client, the server will send us a challenge (a random number encrypted with the public half of the user encryption key), and only a client that knows the private half of the encryption key will be able to decrypt this challenge.

The private half of the user's encryption key on the SSH client is normally also protected by a passphrase. A passphrase is just a long password (it can contain spaces) selected by the user that is used to encrypt the private half of the user's real encryption key before it is stored on disk. We will not use a passphrase, however, because the passphrase would need to be added to our shell script to automate it.


You can instead use the Linux Intrusion Detection System, or LIDS, to prevent anyone (even root) from accessing the private half of our user encryption key that is stored on the disk drive. See also the ssh-agent man page for a description of the ssh-agent daemon.

Here is a simple two-node SSH client-server recipe.

[2]You will also see DSA files in the /etc/ssh directory. The Digital Signature Algorithm, however, has a suspicious history and may contain a security hole. See SSH, The Secure Shell: The Definitive Guide, by Daniel J. Barrett and Richard E. Silverman (O'Reilly and Associates).

[3]Technically, the known_hosts database was used in Open SSH version 1 and renamed known_hosts2 in Open SSH version 2. Because this chapter only discusses Open SSH version 2, we will not bother with this distinction in order to avoid any further cumbersome explanations like this one.

[4]You can verify the "fingerprint" on the SSH server by running the following command: #ssh-keygen -l -f /etc/ssh/

Two-Node SSH Client-Server Recipe

List of ingredients:

  • Primary server (preferably with two Network Interface Cards and cables)

  • Backup server (preferably with two Network Interface Cards and cables)

  • Crossover cable or mini hub

  • Copy of the rsync RPM file

  • Copy of Open SSH (version 2.2 or later)

Included on the CD-ROM with this book is a copy of the Open SSH package and the rsync package in RPM format in the chapter4 subdirectory. Your distribution should already contain Open SSH. To find out which version is loaded on your system, type:

 #rpm -qa | grep ssh

If you do not have version 2.2 or later, you will need to install a newer version either from your distribution vendor or from the CD-ROM included with this book before starting this recipe.


Please check for the latest versions of these software packages from your distribution or from the project home pages so you can be sure you have the most recent security updates for these packages.

Create a User Account that Will Own the Data Files

Perform this step on the SSH client and the SSH server. The SSH client in this recipe is the primary server, and the SSH server is the backup server (see Figure 4-1).

You can use the root account or any user account (you can create a special non-root account dedicated to providing SSH access). You can then grant this user access to commands or scripts as the root user using the sudo program (see man sudo for more information). But if you want to take a shortcut and just use the root account, you will need to modify the default SSHD config file (/etc/ssh/sshd_config) that comes with the Red Hat distribution and change the PermitRootLogin variable so it looks like this:

 PermitRootLogin without-password

In this recipe, we will use the more secure technique of creating a new account called web and then giving access to the files we want to transfer to the backup node (in the directory /www) to this new account; however, for most high-availability server configurations, you will need to use the root account or the sudo program.

  1. Add the web user account with the following command:

     #useradd -u 100 -g 48 -d /home/web -s /bin/bash -c "Web File Owner" -p password web

    For added security, we will (later in this recipe) remove the shell and password for this account in order to prevent normal logons as user web.

    The useradd command automatically created a /home/web directory and gave ownership to the web user, and it set the password for the web account to password.

  2. Now create the /www directory and place a file or two in it:

     #mkdir /www
     #mkdir /www/htdocs
     #echo "This is a test" > /www/htdocs/index.html
     #chown -R web /www

    These commands create the /www/htdocs directory and subdirectory and place the file index.html with the words This is a test in it. Ownership of all of these files and directories is then given to the web user account we just created.

Configure the Open SSH2 Host Key on the SSH Server

If you installed the sshd daemon on your system using the Red Hat RPM file, then all you need to do to create your SSH host keys on the backup server is start the sshd daemon. Check to see if it is already running on the backup server with the command:

 #ps -elf | grep sshd

If it is not running, start it with the command:

 #/etc/init.d/sshd start


or, equivalently:

 #service sshd start

If sshd was not running, you should make sure it starts each time the system boots—see Chapter 1.

Use the following two commands to check to make sure the Open SSH version 2 (RSA) host key has been created. These commands will show you the public and private halves of the RSA host key created by the sshd daemon.

The private half of the Open SSH2 RSA host key:

 #cat /etc/ssh/ssh_host_rsa_key

The public half of the Open SSH2 RSA host key:

 #cat /etc/ssh/

The /etc/rc.d/init.d/sshd script on Red Hat systems will also create Open SSH1 RSA and Open SSH2 DSA host keys (that we will not use) with similar names.

Create a User Encryption Key for this New Account on the SSH Client

To create a user encryption key for this new account:

  1. Log on to the SSH client as the web user you just created. If you are logged on as root, use the following command to switch to the web account:

     #su web
  2. Make sure you are the web user with the command:

  3. Now create an Open SSH2 key with the command:

     #ssh-keygen -t rsa

    The output of this command will look like this:

     Generating public/private rsa key pair.
     Enter file in which to save the key (/home/web/.ssh/id_rsa):
     Enter passphrase (empty for no passphrase): <return> 
     Enter same passphrase again: <return> 
     Your identification has been saved in /home/web/.ssh/id_rsa.
     Your public key has been saved in /home/web/.ssh/
     The key fingerprint is:

    Press ENTER at the passphrase prompts (the lines marked <return> above) to leave the passphrase blank.


    We do not want to use a passphrase to protect our encryption key, because the SSH program would need to prompt us for this passphrase each time a new SSH connection was needed.[5]

    As the output of the above command indicates, the public half of our encryption key is now stored in the file:

  4. View the contents of this file with the command:

     #cat /home/web/.ssh/

    You should see something like this:

     ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAz1LUbTSq/

    This entire file is just one line.
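The interactive dialog above can also be scripted. As a hedged sketch, -N "" supplies the empty passphrase and -f names the output file, so no prompts appear (the /tmp path below is a demo placeholder, not the recipe's real key location):

```shell
# Generate an RSA key pair with no passphrase and no prompts.
ssh-keygen -t rsa -N "" -f /tmp/demo_id_rsa -q
ls /tmp/demo_id_rsa /tmp/demo_id_rsa.pub   # private and public halves
```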

Copy the Public Half of the User Encryption Key From the SSH Client to the SSH Server

Because we are using user authentication with encryption keys to establish the trust relationship between the server and the client computer, we need to place the public half of the user's encryption key on the SSH server. Again, you can use the web user account we have created for this purpose, or you can simply use the root account.

  1. Use the copy-and-paste feature of your terminal emulation software (or use the mouse at one of the system consoles)—even if you are just typing at a plain-text terminal, this will work on Red Hat—and copy the above line (from the file on the SSH client) into the paste buffer.[6]

  2. From another window, log on to the SSH server and enter the following commands:

     #mkdir /home/web/.ssh
     #vi /home/web/.ssh/authorized_keys2

    This screen is now ready for you to paste in the public half of the web user's encryption key.

  3. Before doing so, however, enter the following:

     from="",no-X11-forwarding,no-agent-forwarding 

    This adds even more restriction to the SSH transport.[7] The web user is only allowed in from the IP address specified in the from="" option, and no X11 or agent forwarding (sending packets through the SSH tunnel on to other daemons on the SSH server) is allowed. (In a moment, we'll even disable interactive terminal access.)

  4. Now, without pressing ENTER, paste in the public half of the web user's encryption key (on the same line). This file should now be one very long line that looks similar to this:

     ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAz1LUbTSq/
  5. Set the security of this file:

     #chmod 600 /home/web/.ssh/authorized_keys2

See man sshd and search for the section titled AUTHORIZED_KEYS FILE FORMAT for more information about this file.

You should also make sure the security of the /home/web and the /home/web/.ssh directories is set properly (both directories should be owned by the web user, and no other user needs to have write access to either directory).
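One reasonable way to satisfy those ownership and permission requirements, sketched with the paths used in this recipe (run as root on the SSH server; sshd will refuse public-key logins if these directories are group- or world-writable):

```shell
# Lock down the web user's home and key directories on the backup server.
chown web /home/web /home/web/.ssh
chmod 755 /home/web                          # nobody but web may write here
chmod 700 /home/web/.ssh                     # only web may enter the key dir
chmod 600 /home/web/.ssh/authorized_keys2    # key file readable by web only
```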

The vi editor can combine multiple lines into one using the keys SHIFT+J. However, this method is prone to errors, and it is better to use a telnet copy-and-paste utility (or even better, copy the file using the ftp utility or use the scp command and manually enter the user account password).

Test the Connection From the SSH Client to the SSH Server

Log on to the SSH client as the web user again, and enter the following command:

 #ssh hostname

Because this is the first time you have connected to this SSH server, the public half of the SSH server's host encryption key is not in the SSH client's known_hosts database. You should see a warning message like the following:

 The authenticity of host ' (' can't be established.
 RSA key fingerprint is a4:76:aa:ed:bc:2e:78:0b:4a:86:d9:aa:37:bb:2c:93.
 Are you sure you want to continue connecting (yes/no)?

When you type yes and press ENTER, the SSH client adds the public half of the SSH server's host encryption key to the known_hosts database and displays:

 Warning: Permanently added '' (RSA) to the list of known hosts.

To avoid this prompt, we could have copied the public half of the SSH server's host encryption key into the known_hosts database on the SSH client.

The SSH client should now complete your command. In this case, we asked it to execute the command hostname on the remote server, so you should now see the name of the SSH server on your screen and control returns to your local shell on the SSH client.

A new file should have been created (if it did not already exist) on the SSH client called /home/web/.ssh/known_hosts2.

Instead of the remote host name, your SSH client may prompt you for a password with a display such as the following:

 web@'s password:

This means the SSH server was not able to authenticate your web user account with the public half of the user encryption key you pasted into the /home/web/.ssh/authorized_keys2 file. (Check to make sure you followed all of the instructions in the last section.)

If this doesn't help, try the following command to see what SSH is doing behind the scenes:

 #ssh -v hostname

The output of this command should display a listing similar to the following:

 OpenSSH_2.5.2p2, SSH protocols 1.5/2.0, OpenSSL 0x0090600f
 debug1: Seeding random number generator
 debug1: Rhosts Authentication disabled, originating port will not be trusted.
 debug1: ssh_connect: getuid 100 geteuid 0 anon 1
 debug1: Connecting to [] port 22.
 debug1: Connection established.
 debug1: unknown identity file /home/web/.ssh/identity
 debug1: identity file /home/web/.ssh/identity type -1
 debug1: unknown identity file /home/web/.ssh/id_rsa
 debug1: identity file /home/web/.ssh/id_rsa type -1
 debug1: identity file /home/web/.ssh/id_dsa type 2
 debug1: Remote protocol version 1.99, remote software version OpenSSH_2.5.2p2
 debug1: match: OpenSSH_2.5.2p2 pat ^OpenSSH
 Enabling compatibility mode for protocol 2.0
 debug1: Local version string SSH-2.0-OpenSSH_2.5.2p2
 debug1: send KEXINIT
 debug1: done
 debug1: wait KEXINIT
 debug1: got kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-group1-
 debug1: got kexinit: ssh-rsa,ssh-dss
 debug1: got kexinit:
 debug1: got kexinit:
 debug1: got kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-,hmac-sha1-96,hmac-md5-96
 debug1: got kexinit: hmac-md5,hmac-sha1,hmac-ripemd160,hmac-,hmac-sha1-96,hmac-md5-96
 debug1: got kexinit: none,zlib
 debug1: got kexinit: none,zlib
 debug1: got kexinit:
 debug1: got kexinit:
 debug1: first kex follow: 0
 debug1: reserved: 0
 debug1: done
 debug1: kex: server->client aes128-cbc hmac-md5 none
 debug1: kex: client->server aes128-cbc hmac-md5 none
 debug1: Sending SSH2_MSG_KEX_DH_GEX_REQUEST.
 debug1: Wait SSH2_MSG_KEX_DH_GEX_GROUP.
 debug1: dh_gen_key: priv key bits set: 129/256
 debug1: bits set: 1022/2049
 debug1: Sending SSH2_MSG_KEX_DH_GEX_INIT.
 debug1: Wait SSH2_MSG_KEX_DH_GEX_REPLY.
 debug1: Got SSH2_MSG_KEXDH_REPLY.
 debug1: Host '' is known and matches the RSA host key.
 debug1: Found key in /home/web/.ssh/known_hosts2:1
 debug1: bits set: 1025/2049
 debug1: ssh_rsa_verify: signature correct
 debug1: Wait SSH2_MSG_NEWKEYS.
 debug1: send SSH2_MSG_NEWKEYS.
 debug1: done: send SSH2_MSG_NEWKEYS.
 debug1: done: KEX2.
 debug1: service_accept: ssh-userauth
 debug1: authentications that can continue: publickey,password
 debug1: next auth method to try is publickey
 debug1: try privkey: /home/web/.ssh/identity
 debug1: try privkey: /home/web/.ssh/id_dsa
 debug1: try pubkey: /home/web/.ssh/id_rsa
 debug1: Remote: Port forwarding disabled.
 debug1: Remote: X11 forwarding disabled.
 debug1: Remote: Agent forwarding disabled.
 debug1: Remote: Pty allocation disabled.
 debug1: input_userauth_pk_ok: pkalg ssh-dss blen 433 lastkey 0x8083a58 hint 2
 debug1: read SSH2 private key done: name rsa w/o comment success 1
 debug1: sig size 20 20
 debug1: Remote: Port forwarding disabled.
 debug1: Remote: X11 forwarding disabled.
 debug1: Remote: Agent forwarding disabled.
 debug1: Remote: Pty allocation disabled.
 debug1: ssh-userauth2 successful: method publickey
 debug1: fd 5 setting O_NONBLOCK
 debug1: fd 6 IS O_NONBLOCK
 debug1: channel 0: new [client-session]
 debug1: send channel open 0
 debug1: Entering interactive session.
 debug1: client_init id 0 arg 0
 debug1: Sending command: hostname
 debug1: channel 0: open confirm rwindow 0 rmax 16384
 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
 debug1: channel 0: rcvd eof
 debug1: channel 0: output open -> drain
 debug1: channel 0: rcvd close
 debug1: channel 0: input open -> closed
 debug1: channel 0: close_read
 debug1: channel 0: obuf empty
 debug1: channel 0: output drain -> closed
 debug1: channel 0: close_write
 debug1: channel 0: send close
 debug1: channel 0: is dead
 debug1: channel_free: channel 0: status: The following connections are open:
   #0 client-session (t4 r0 i8/0 o128/0 fd -1/-1)
 debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.0 seconds
 debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0
 debug1: Exit status 0

Compare this output to yours, paying particular attention to the key exchange, host key verification, and publickey authentication lines in this listing.

If you are prompted for the password and you are using the root account, you did not set the PermitRootLogin option (as described earlier in this chapter) in the /etc/ssh/sshd_config file on the server you are trying to connect to. If you did not set the IP address correctly in the authorized_keys2 file on the server, you may see an error such as the following:

 debug1: Remote: Your host '' is not permitted to use this key for login.

If you see this error, you need to specify the correct IP address in the authorized_keys2 file on the server you are trying to connect to. (You'll find this file is in the .ssh subdirectory under the home directory of the user you are using.)

Further troubleshooting information and error messages are also available in the /var/log/messages and /var/log/secure files.

Using the Secure Data Transport

Now that we have established a secure data transport between the SSH client and the SSH server, we can copy files, send commands, or create an interactive shell.

  • To copy files, use the scp command. Here is an example command to copy the file /etc/hosts on the SSH client to /tmp/hosts on the SSH server:

     #scp /etc/hosts
  • To send commands to the SSH server from the ssh client, use a command such as the following:

     #ssh who

    This executes the who command on the remote machine (the SSH server).

  • To start an interactive shell on the SSH server, enter the following command on the SSH client:


This logs you on to the SSH server and provides you with a normal, interactive shell. When you enter commands, they are executed on the SSH server. When you are done, type exit to return to your shell on the SSH client.

Improved Security

Once you have tested this configuration and are satisfied that the SSH client can connect to the SSH server, you can improve the security of your SSH client and SSH server. To do so, perform the following steps.

  1. Remove the interactive shell access for the web account on the SSH server (the backup server) by editing the authorized_keys2 file and adding the no-pty option:

     #vi /home/web/.ssh/authorized_keys2
  2. Change the beginning of this one-line file to look like this:

     from="",no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss
  3. Once you have finished with the rsync recipe shown later in this chapter, you can deactivate the normal user account password on the backup server (the SSH server) with the following command:

     #usermod -L web

    This command inserts an ! in front of the web user's encrypted password in the /etc/shadow file. (Don't use this command on the root account!)
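The effect on the shadow entry can be illustrated without touching the real /etc/shadow (the hash below is a made-up placeholder):

```shell
# Simulate what `usermod -L web` does: prefix the encrypted password
# field of the shadow entry with an exclamation mark.
entry='web:$1$fakesalt$fakehashfakehash:15000:0:99999:7:::'
locked=$(printf '%s\n' "$entry" | sed 's/^\([^:]*\):/\1:!/')
printf '%s\n' "$locked"
```

Because the `!` makes the stored hash match no password, the account can no longer log in with a password, but the SSH key still works.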

rsync over SSH

Your distribution should already have the rsync program. Check to make sure it is installed on both systems with the command:

 #rpm -qa | grep rsync

If you do not have the rsync program installed on your system, you can load the rsync source code included on the CD-ROM with this book in the chapter4 subdirectory. (Copy the source code to a subdirectory on your system, untar it, and then run ./configure, make, and then make install.)

Now that we have a secure, encrypted method of transmitting data between our two cluster nodes, we can configure the rsync program to collect and ship data through this SSH transport. In the following steps, we'll create files in the /www directory on the SSH client and then copy these files to the SSH server using the rsync program and the web account.

  1. Log on to the SSH client (the primary server) as root, and then enter:

     #mkdir /www
     #mkdir /www/htdocs
     #echo "This is a test" > /www/htdocs/index.html
  2. Give ownership of all of these files to the web user with the command:

     #chown -R web /www
  3. Now, on the SSH server (the backup server), we need to create at least the /www directory and give ownership of this directory to the web user because the web user does not have permission to create directories at the root level of the drive (in the / directory). To do so, log on to the SSH server (the backup server) as root, and then enter:

     #mkdir /www
     #chown web /www
  4. At this point, we are ready to use a simple rsync command on the SSH client (our primary server) to copy the contents of the /www directory (and all subdirectories) over to the SSH server (the backup server). Log on to the SSH client as the web user (from any directory), and then enter:

     #rsync -v -a -z -e ssh --delete /www/

If you leave off the trailing / in /www/, you will be creating a new subdirectory on the destination system where the source files will land rather than updating the files in the /www directory on the destination system.

Because we use the -v option, the rsync command will copy the files verbosely. That is, you will see the list of directories and files that were in the local /www directory as they are copied over to the /www directory on the backup node.[8]

We are using these rsync options:

-v

  • Tells rsync to be verbose and explain what is happening.

-a

  • Tells rsync to copy all of the files and directories from the source directory.

-z

  • Tells rsync to compress the data in order to make the network transfer faster. (Our data will be encrypted and compressed before going over the wire.)

-e ssh

  • Tells rsync to use our SSH shell to perform the network transfer. You can eliminate the need to type this argument each time you issue an rsync command by setting the RSYNC_RSH shell environment variable. See the rsync man page for details.

--delete

  • Tells rsync to delete files on the backup server if they no longer exist on the primary server. (This option will not remove any primary, or source, data.)

/www/

  • This is the source directory on the SSH client (the primary server) with a trailing slash (/). The trailing slash is required if you want rsync to update the contents of the destination directory rather than create a new subdirectory inside it.

The final argument

  • This is the destination IP address and destination directory, written in the form [user@]host:/path.


To tell rsync to display on the screen everything it will do when you enter this command, without actually writing any files to the destination system, add a -n to the list of arguments.

If you now run this exact same rsync command again, rsync should see that nothing has changed in the source files and send almost nothing over the network connection, though it will still send a little bit of data to the rsync program on the destination system to check for disk blocks that have changed.

To see exactly what is happening when you run rsync, add the --stats option to the above command, as shown in the following example. Log on to the primary server (the SSH client) as the web user, and then enter:

 rsync -v -a -z -e ssh --delete --stats /www/

rsync should now report that the Total transferred file size is 0 bytes, but you will see that a few hundred bytes of data were written.

Copying a Single File with rsync

To copy just one file using rsync, enter:

 #rsync -v -a -z -e ssh /etc/sysconfig/iptables

Notice in this example that the destination directory is specified but that the destination file name is optional.

Here is how to copy one file from a remote machine to the local one:

 #rsync -v -a -z -e ssh /etc/sysconfig/iptables

In this example, we have specified the destination directory and the (optional) destination file name.

rsync over Slow WAN Connections

If your primary and backup server connect to each other via the Internet (or a slow connection such as a private T1 line), you may also want to add the --bwlimit option to the rsync command. The number you specify following --bwlimit is the maximum number of kilobytes that rsync should try to send over the network per second. For example, to limit the rsync connection to 8 KB per second (about 1/24 of the capacity of a T1 connection), you could log on to the SSH client as the web user and enter:

 rsync -v -a -z -e ssh --delete --bwlimit=8 /www/
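The 1/24 figure comes from simple arithmetic: a T1 carries 1.544 Mbit/s, which works out to roughly 193 KB per second:

```shell
# A T1 line is 1.544 Mbit/s; dividing by 8 bits/byte gives bytes/s,
# and by 1000 gives (decimal) kilobytes per second.
t1_kbytes=$(( 1544000 / 8 / 1000 ))   # 193 KB/s
share=$(( t1_kbytes / 24 ))           # 8 KB/s, the --bwlimit value used above
echo "T1 = ${t1_kbytes} KB/s, 1/24 = ${share} KB/s"
```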

Scheduled rsync Snapshots

Once you are happy with your rsync configuration, you can add a cron job to your primary server that will perform the rsync operation at regular intervals. For example, to synchronize the data in the /www directory at 30 minutes past the hour, every hour, you would enter the following command while logged on to the primary server as the web account:

 #crontab -e

This should bring up the current crontab file for the web account in the vi editor.[9] You will see that this file is empty because you do not have any cron jobs for the web account yet. To add a job, press the I key to insert text and enter:

 30 * * * * rsync -v -a -z -e ssh --delete /www/ > /dev/null 2>&1

You can add a comment on a line by itself to this file by preceding the comment with the # character. For example, your file might look like this:

 # This is a job to synchronize our files to the backup server.
 30 * * * * rsync -v -a -z -e ssh --delete /www/ > /dev/null 2>&1

Then press ESC, followed by the key sequence :wq to write and quit the file.
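The five leading fields of that crontab line are minute, hour, day of month, month, and day of week; this sketch pulls them apart (globbing disabled so the * fields survive word splitting):

```shell
# Split a crontab entry into its five time fields plus the command.
line='30 * * * * rsync -v -a -z -e ssh --delete /www/ > /dev/null 2>&1'
set -f            # keep the shell from expanding the * characters
set -- $line      # word-split into positional parameters
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
```

A `*` means "every value," so `30 * * * *` fires at minute 30 of every hour of every day.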

We have added > /dev/null 2>&1 to the end of this line to prevent cron from sending an email message every half hour when this cron job runs (output from the rsync command is sent to the /dev/null device, which means the output is ignored). You may instead prefer to log the output from this command, in which case you should change /dev/null in this command to the name of a log file where messages are to be stored.

To cause this command to send an email message only when it fails, you can modify it as follows:

 rsync -v -a -z -e ssh --delete /www/ || echo "rsync failed"
   | mail

To view your cron job listing, enter:

 #crontab -l

This cron job will take effect the next time the clock reads thirty minutes after the hour.

The format of the crontab file is in section 5 of the man pages. To read this man page, type:

 #man 5 crontab

See Chapter 1 to make sure the crond daemon starts each time your system boots.

ipchains/iptables Firewall Rules for rsync and SSH

If your primary server uses its eth1 network interface to connect to the backup server, you can prevent unwanted (outside) attempts to use SSH or rsync with the following ipchains/iptables rules.


You can accomplish the same thing these rules are doing by using access rules in the sshd_config file.
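As a sketch of that sshd_config approach (hypothetical addresses; substitute your own cluster network values), you could bind sshd to the cluster interface and restrict which account may connect and from where:

```
# /etc/ssh/sshd_config fragment (a sketch, not a complete file).
# Hypothetical addresses: 10.1.1.3 is the backup server's eth1 address,
# and the web user may only connect from the 10.1.1.0/24 cluster network.
ListenAddress 10.1.1.3
AllowUsers web@10.1.1.*
```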

SSH uses port 22:

 ipchains -A input -i eth1 -p tcp -s 1024:65535
   -d 22 -j ACCEPT


 iptables -A INPUT -i eth1 -p tcp -s --sport 1024:65535
   -d --dport 22 -j ACCEPT

rsync uses port 873:

 ipchains -A input -i eth1 -p tcp -s 1024:65535
   -d 873 -j ACCEPT


 iptables -A INPUT -i eth1 -p tcp -s --sport 1024:65535
   -d --dport 873 -j ACCEPT

[5]If you don't mind entering a passphrase each time your system boots, you can set a passphrase here and configure the ssh-agent for the web account. See the ssh-agent man page.

[6]Security experts cringe at the thought that you might try to do this paste operation via telnet over the Internet because a hacker could easily intercept this transmission and steal your encryption key.

[7]An additional option that could be used here is no-port-forwarding which, in this case, could mean that the web user on the SSH client machine cannot use the SSH transport to pass IP traffic destined for another UDP or IP port on the SSH server. However, if you plan to use the SystemImager software described in the next chapter, you need to leave port forwarding enabled on the SystemImager Golden Client.

[8]If this command does not work, check to make sure the ssh command works as described in the previous recipe.

[9]This assumes that vi is your default editor. The default editor is controlled by the EDITOR shell environment variable.

[10]As we'll see later in these chapters, the configuration files for Heartbeat should always be the same on the primary and the backup server. In some cases, however, the Heartbeat program must be notified of the changes to its configuration files (service heartbeat reload) or even restarted to make the changes take effect (when the /etc/ha.d/haresources file changes).

In Conclusion

To build a highly available server pair that can be easily administered, you need to be able to copy configuration files from the primary server to the backup server when you make changes to files on the primary server. One way to copy configuration files from the primary server to the backup server in a highly available server pair is to use the rsync utility in conjunction with the Open Secure Shell transport.

It is tempting to think that this method of copying files between two servers can be used to create a very low-cost highly available data solution using only locally attached disk drives on the primary and the backup servers. However, if the data stored on the disk drives attached to the primary server changes frequently, as would be the case if the highly available server pair is an NFS server or an SQL database server, then using rsync combined with Open SSH is not an adequate method of providing highly available data. The data on the primary server could change, and the disk drive attached to the primary server could crash, long before an rsync cron job has the chance to copy the data to the backup server.[11] When your highly available server pair needs to modify data frequently, you should do one of the following things to make your data highly available:

  • Install a distributed filesystem such as Red Hat's Global File System (GFS) on both the primary and the backup server (see Appendix D for a partial list of additional distributed filesystems).

  • Connect both the primary and the backup server to a highly available SAN and then make the filesystem where data is stored a resource under Heartbeat's control.

  • Attach both the primary and the backup server to a single shared SCSI bus that uses RAID drives to store data and then make the filesystem where data is stored a resource under Heartbeat's control.

When you install the Heartbeat package on the primary and backup server in a highly available server pair and make the shared storage filesystem (as described in the last two bullets) where your data is stored a resource under Heartbeat's control, the filesystem will only be mounted on one server at a time (under normal circumstances, the primary server). When the primary server crashes, the backup server mounts the shared storage filesystem where the data is stored and launches the SQL or NFS server to provide cluster nodes with access to the data. Using a highly available server pair and shared storage in this manner ensures that no data is lost as a result of a failover.

I'll describe how to make a resource highly available using the Heartbeat package later in this part of the book, but before I do, I want to build on the concepts introduced in this chapter and describe how to clone a system using the SystemImager package.

[11]Using rsync to make snapshots of your data may, however, be an acceptable disaster recovery solution.

Chapter 5: Cloning Systems with SystemImager

Now that we have a secure and reliable way to transfer files between computers (the subject of Chapter 4), we can make a backup copy, or system image, of one computer and store it on the disk drive of another computer. This system image can then be used to recover a system in the event of a disk crash, or it can be used to make a clone of the original system. Cloned systems can become backup servers in high-availability server pairs, or they can become cluster nodes. When changes are made to the original system, the cloned systems can be easily updated using a newly created system image of the original system.


Using the SystemImager package, we can turn one of the cluster nodes into a Golden Client that will become the master image used to build all of the other cluster nodes. We can then store the system image of the Golden Client on the backup server (offering the SSH server service discussed in the previous chapter). In this chapter, the backup server is called the SystemImager server, and it must have a disk drive large enough to hold a complete copy of the contents of the disk drive of the Golden Client, as shown in Figure 5-1.

Image from book
Figure 5-1: The SystemImager server and Golden Client

Cloning the Golden Client with SystemImager

Clones 1, 2, and 3 in this diagram are all exact copies of the Golden Client; only their IP address information and host names are different.

Here, then, are the steps for cloning a system using the SystemImager package.


The following recipe and the software included on the CD-ROM do not contain support for SystemImager over SSH. To use SystemImager over SSH (on RPM-based systems), you must download and compile the SystemImager source code as described on the SystemImager website.

SystemImager Recipe

The list of ingredients follows. See Figure 5-2 for a list of software used on the Golden Client, SystemImager server, and clone(s).

Image from book
Figure 5-2: Software used on SystemImager server, Golden Client, and clone(s)
  • Computer that will become the Golden Client (with SSH client software and rsync)

  • Computer that will become the SystemImager server (with SSH server software and rsync)

  • Computer that will become a clone of the Golden Client (with a network connection to the SystemImager server and a floppy disk drive)

Install the SystemImager Server Software on the SystemImager Server

The machine you select to be the SystemImager server should have enough disk space to hold several system images. After you decide which machine to use, you can install the software using the installation program available on the download page of the SystemImager website, or you can install the software using the CD-ROM included with this book. Before we can install the software, however, we need to install a few Perl modules that SystemImager depends upon for normal operation: AppConfig, MLDBM, and XML-Simple. The source code for these three Perl modules is included on the CD-ROM in tar.gz format in the chapter5/perl-modules directory. Copy them to a directory on the hard drive (/usr/local/src/systemimager/perl-modules in the following example), and compile and install them with the commands perl Makefile.PL, make, make test, and make install.

When you have finished installing these Perl modules, you're ready to install the SystemImager server package. If you are using an RPM-based distribution such as Red Hat, you can use the RPM files included with this book. Mount the CD-ROM, switch to the chapter5 directory, and then type:

 #rpm -ivh *rpm

Only the RPM files for i386 hardware are included on the CD-ROM. For other types of hardware, see the SystemImager website.

The RPM command may complain about failed dependencies. If the only dependencies it complains about are the Perl modules we just installed (AppConfig, which is sometimes called libappconfig, MLDBM, and XML-Simple), you can force the RPM command to ignore these dependencies and install the RPM files with the command:

 #rpm -ivh --nodeps *rpm

Using the Installer Program From SystemImager

If you just installed the software using the CD-ROM included with this book, you can skip this section.

The list of software installed by the SystemImager install program is downloaded from the SystemImager website each time the script runs. It contains URLs to locate the common software (required on both the SystemImager client and the SystemImager server) and the server software. You will also see different versions of the kernel boot architecture RPMs. The install script will download only the packages you specify and any dependencies they have that are also in the list. For all versions of x86 hardware, the i386 boot RPM is used. You can download all available boot architectures, but normally you will only need one (to support the architecture you are using on all cluster nodes). The commands to list the available SystemImager packages are as follows:

 #chmod +x install
 #./install -list

Then install the packages you need using a command like this:

 #./install -verbose systemimager-common systemimager-server systemimager-

The installer program runs the rpm command to install the packages it downloads.

Install the SystemImager Client Software on the Golden Client

Also included on the CD-ROM with this book is the SystemImager client software. This software is located in the chapter5/client directory.


As with all of the software in this book, you can download the latest version of the SystemImager package from the Internet.

Before you can install this software, you'll need to install the Perl AppConfig module, as described in the previous step (it is located on the CD-ROM in the chapter5/perl-modules directory). Again, because the RPM program expects to find dependencies in the RPM database, and we have installed the AppConfig module from source code, you can tell the rpm program to ignore the dependency failures and install the client software with the following commands.

 #mount /mnt/cdrom
 #cd /mnt/cdrom/chapter5/client  #rpm -ivh --nodeps *

You can also install the SystemImager client software using the installer program found on the SystemImager download website.

Once you have installed the packages, you are ready to run the command:

 #/usr/sbin/prepareclient -server <servername>

This script will launch rsync as a daemon on this machine (the Golden Client) so that the SystemImager server can gain access to its files and copy them over the network onto the SystemImager server. You can verify that this script worked properly by using the command:

 #ps -elf | grep rsync

You should see a running rsync daemon (using the file /tmp/rsyncd.conf as a configuration file) that is waiting for requests to send files over the network.

Create a System Image of the Golden Client on the SystemImager Server

We are now ready to copy the contents of the Golden Client's disk drive(s) to the SystemImager server. The images will be stored on the SystemImager server in the /var/lib/systemimager directory. Check to make sure you have enough disk space in this directory by doing the following:

 #df -h /var/lib/systemimager

If this is not enough space to hold a copy of the contents of the disk drive or drives on the Golden Client, use the -directory option when you use the getimage command, or change the default location for images in the /etc/systemimager/systemimager.conf file and then create a link back to it, ln -s /usr/local/systemimager /var/lib/systemimager.

Now (optionally, if you are not running the command as root) give the web user[1] permission to run the /usr/sbin/getimage command by editing the /etc/sudoers file with the visudo command and adding the following line:

 web = NOPASSWD:/usr/sbin/getimage

where is the host name (see the output of the hostname command) of the SystemImager server.

We can now switch to the web user account on the SystemImager server and run the /usr/sbin/getimage command.

Logged on to the SystemImager server:

 #sudo /usr/sbin/getimage -user-ssh web -golden-client -image

In this example, the Golden Client is at IP address

If you are logged on as root and do not need to use the SSH protocol to encrypt your network traffic, use the following command:

 #/usr/sbin/getimage -golden-client -image
  • The -image name does not necessarily need to match the host name of the Golden Client, though this is normally how it is done.

  • You cannot get the image of a Golden Client unless it knows the IP address and host name of the image server (usually specified in the /etc/ hosts file on the Golden Client).

  • On busy networks, the getimage command may fail to complete properly due to timeout problems. If this happens, try the command a second or third time. If it still fails, you may need to run it at a time of lower network activity.

  • When the configuration changes on the backup server, you will need to run this command again, but thanks to rsync, only the disk blocks that have been modified will need to be copied over the network. (See "Performing Maintenance: Updating Clients" at the end of this chapter for more information.)

Once you answer the initial questions and the script starts running, it will spend a few moments analyzing the list of files before the copy process begins. You should then see a list of files scroll by on the screen as they are copied to the local hard drive on the SystemImager server.

When the command finishes, it will prompt you for the method you would like to use to assign IP addresses to future client systems that will be clones of the backup data server (the Golden Client). If you are not sure, select static for now.

When prompted, select y to run the addclients program (if you select n you can simply run the addclients command by itself later).

You will then be prompted for your domain name and the "base" host name you would like to use for naming future clones of the backup data server. As you clone systems, the SystemImager software automatically assigns the clones host names using the name you enter here followed by a number that will be incremented each time a new clone is made (backup1, backup2, backup3, and so forth in this example).

The script will also help you assign an IP address range to these host names so they can be added to the /etc/hosts file. When you are finished, the entries will be added to the /etc/hosts file (on the SystemImager server) and will be copied to the /var/lib/systemimager/scripts/ directory. The entries should look something like this:

 backup1
 backup2
 backup3
 backup4
 backup5
 backup6

and so on, depending upon how many clones you plan on adding.
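The naming scheme described above is just the base name plus an incrementing suffix; a quick local illustration:

```shell
# Generate clone host names the way addclients does: base name + counter.
base=backup
names=$(for i in 1 2 3; do printf '%s%d\n' "$base" "$i"; done)
echo "$names"
```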

Make the Primary Data Server into a DHCP Server

Normally, you will skip this section and use the static method to assign IP addresses (placing the IP address of each client on the boot floppy created in the next section). If you already use DHCP or don't want to install a DHCP server on your network, skip to the next section now.

First, we need to be sure the DHCPD and TFTPD software is loaded on the SystemImager server. On a Red Hat system, you can do this with the following commands:

 #rpm -qa | grep dhcp

You may have the DHCP client software installed on your computer. The DHCP client software RPM package is called dhcpcd, but we need the dhcp package.

If this command does not return the version number of the currently installed DHCP software package, you will need to install it either from your distribution CD or by downloading it from the Red Hat (or Red Hat mirror) FTP site (or using the Red Hat up2date command).


As of Red Hat version 7.3, the init script to start the dhcpd server (/etc/rc.d/init.d/dhcpd) is contained in the dhcp RPM.

We are now ready to modify the DHCPD configuration file /etc/dhcpd.conf. Fortunately, the SystemImager package includes a utility called mkdhcpserver,[2] which will ask questions and then do this automatically.

As root on the SSH server, type the command:

 #mkdhcpserver
Once you have entered your domain name, network number, and netmask, you will be asked for the starting IP address for the DHCP range. This should be the same IP address range we just added to the /etc/hosts file.

In this example, we will not use SSH to install the clients.[3]

The IP address of the image server will also be the cluster network internal IP address of the primary data server (the same IP address as the default gateway):

 Here are the values you have chosen:
 DNS domain name:                 
 Network number:                  
 Starting IP address for your DHCP range:
 Ending IP address for your DHCP range:
 First DNS server:
 Second DNS server:
 Third DNS server:
 Default gateway:                 
 SSH files download URL:
 Are you satisfied? (y/[n]):

If all goes well, the script should ask you if you want to restart the DHCPD server. You can say no here because we will do it manually in a moment.

View the /etc/dhcpd.conf file to see the changes that this script made by typing:

 #vi /etc/dhcpd.conf

Before we can start DHCPD for the first time, however, we need to create the leases file. This is the file that the DHCP server uses to hold a particular IP address for a MAC address for a specified lease period (as controlled by the /etc/dhcpd.conf file). Make sure this file exists by typing the command:

 #touch /var/lib/dhcp/dhcpd.leases

Now we are ready to stop and restart the DHCP server, but as the script warns, we need to make sure it is running on the proper network interface card. Use the ifconfig -a command and determine what the eth number is for your cluster network (the 10.1.1.x network in this example).

If you have configured your primary data server as described in this chapter, you will want DHCP to run on the eth1 interface, so we need to edit the DHCPD startup script and modify the daemon line as follows.

 #vi /etc/rc.d/init.d/dhcpd

Change the line that reads:

 daemon /usr/sbin/dhcpd

so it looks like this:

 daemon /usr/sbin/dhcpd eth1
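If you prefer to script the edit rather than use vi, a sed one-liner can make the same change (shown here against a copy of the line; back up the real init script before modifying it):

```shell
# Append "eth1" to the daemon line, as the manual vi edit above does.
before='daemon /usr/sbin/dhcpd'
after=$(printf '%s\n' "$before" |
  sed 's|daemon /usr/sbin/dhcpd$|daemon /usr/sbin/dhcpd eth1|')
printf '%s\n' "$after"
```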

Also, (optionally) you can configure the system to start DHCP automatically each time the system boots (see Chapter 1). For improved security and to avoid conflicts with any existing DHCP servers on your network, however, do not do this (just start it up manually when you need to clone a system).

Now manually start DHCPD with the command:

 #/etc/init.d/dhcpd start

(The command service dhcpd start does the same thing on a Red Hat system.)

Make sure the server starts properly by checking the messages log with the command:

 #tail /var/log/messages

You should see DHCP reporting the "socket" or network card and IP address it is listening on with messages that look like this:

 dhcpd: Listening on Socket/eth1/
 dhcpd: Sending on Socket/eth1/
 dhcpd: dhcpd startup succeeded

Create a Boot Floppy for the Golden Client

We are now ready to create a floppy diskette that can be used to boot a new system and automatically copy over the entire system image to the hard drive. Create a new Linux[4] boot floppy on the primary data server by placing a blank floppy in the drive and typing the SystemImager command[5]:

 #mkautoinstalldiskette
We will not use the SSH protocol to install the Golden Client for the remainder of this chapter. If you want to continue to use SSH to install the Golden Client (not required to complete the install), see the help page information on the mkautoinstalldiskette command.

If, after the script formats the floppy, you receive the error message stating that:

 mount: fs type msdos not supported by kernel

you need to recompile your kernel with support for the MSDOS filesystem.[6]


The script will use the superformat utility if you have the fdutils package installed on your computer (but this package is not necessary for the mkautoinstalldiskette script to work properly).

If you did not build a DHCP server in the previous step, you should now install a local.cfg file on your floppy.[7] The local.cfg file contains the following lines:

 # "SystemImager" - Copyright (C) Brian Elliott Finley <>
 # This is the /local.cfg file.

Create this file using vi on the hard drive of the image server (the file is named /tmp/local.cfg in this example), and then copy it to the floppy drive with the commands:

 #mount /mnt/floppy
 #cp /tmp/local.cfg /mnt/floppy
 #umount /mnt/floppy

When you boot the new system on this floppy, it will use a compact version of Linux to format your hard drive and create the partitions to match what was on the Golden Client. It will then use the local.cfg file you installed on the floppy, or if one does not exist, the DHCP protocol to find the IP address it should use. Once the IP address has been established on this new system, the install process will use the rsync program to copy the Golden Client system image to the hard drive.

Also, if you need to make changes to the disk partitions or type of disk (IDE instead of SCSI, for example), you can make changes to the script file that will be downloaded from the image server and run on the new system by modifying the script located in the scripts directory (defined in the /etc/systemimager/systemimager.conf file).[8] The default setting for this directory path is /var/lib/systemimager/scripts.[9] This directory should now contain a list of host names (that you specified when you ran the addclients script in Step 3 of this recipe). All of these host-name files point to a single (real) file containing a script that will run on each clone machine at the start of the cloning process. So in this example, the directory listing would look like this:

 #ls -ls /var/lib/systemimager/scripts/backup*

The output of this command looks like this:

 /var/lib/systemimager/ ->
 /var/lib/systemimager/ ->
 /var/lib/systemimager/ ->

Notice how all of these files are really symbolic links to the same target file.

This file contains the script, which you can modify if you need to change anything about the basic drive configuration on the clone system. It is possible, for example, to create a Golden Client that uses a SCSI disk drive, modify this script file (changing the sd device names to hd, among other modifications), and then install the clone image onto a system with an IDE hard drive. (The script file contains helpful comments for making disk partition changes.)

This kind of modification is tricky, and you need the right type of kernel configuration (the correct drivers) on the original Golden Client to make it all work, so the easiest configuration by far is to match the hardware on the Golden Client and any clones you create.

Start rsync as a Daemon on the Primary Data Server

You'll need to start rsync as a daemon on the SystemImager server and point it at the rsync configuration file created and maintained by the SystemImager programs in the /etc/systemimager directory. The SystemImager installation procedure should have created the init script /etc/init.d/systemimager-server-rsyncd. This script starts rsync as a daemon (rsync stays running in the background waiting for the cloning processes to start on a new system) when you enter the command:

 #/etc/init.d/systemimager-server-rsyncd start

(Or service systemimager-server-rsyncd start on Red Hat systems.)

You can also start rsync manually with the command:

 #rsync --daemon --config=/etc/systemimager/rsyncd.conf

rsync gets its configuration from the file /etc/systemimager/rsyncd.conf. This file tells the rsync daemon to report its error messages to the file /var/log/systemimager/rsyncd. Examine this log now with the command:

 #tail -f /var/log/systemimager/rsyncd

This should show log file entries indicating that rsync was successfully started (press CTRL-C to break out of this command).

Install the Golden Client System Image on the New Clone

Connect your new clone system to the network and boot it from the floppy disk you just created. After it boots from this floppy, it will use the local.cfg file to set its IP address information or contact the DHCP server and receive the next IP address you have configured for your clones.

This newly connected virginal clone machine will then use an rsync command such as the following to pull down the Golden Client system image:

 rsync -av --bwlimit=10000 --numeric-ids /a/

This command is asking the machine (the SystemImager server) to send all of the files it has for the image. How does rsync running on the SystemImager server know which files to send? It examines the /etc/systemimager/rsyncd.conf file for an entry such as the following.

     path = /var/lib/systemimager/images/

This allows the rsync program to resolve the rsync module name backup into the path /var/lib/systemimager/images/ to find the correct list of files.
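
The module-to-path lookup rsync performs here can be sketched in a few lines of awk. The configuration file and module name below are hypothetical stand-ins, not your actual rsyncd.conf:

```shell
# Sketch: resolve an rsync module name to its path by scanning an
# rsyncd.conf-style file for the "[module]" stanza and its "path =" line.
resolve_module() {
    # $1 = config file, $2 = module name
    awk -v mod="[$2]" '
        $0 == mod              { in_mod = 1; next }  # entered our stanza
        /^\[/                  { in_mod = 0 }        # a new stanza ends it
        in_mod && $1 == "path" { print $3; exit }    # "path = /some/dir"
    ' "$1"
}

# Build a small sample config (the module name "backup" is made up):
cat > /tmp/rsyncd-sample.conf <<'EOF'
[backup]
    path = /var/lib/systemimager/images/backup
EOF

resolve_module /tmp/rsyncd-sample.conf backup   # prints the module's path
```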

You can watch what is happening during the cloning process by running the tcpdump command on the server (see Appendix B) or with the command:

 #tail -f /var/log/systemimager/rsyncd

To see the network queues, type:

 #netstat -tn

When rsync is finished copying the Golden Client image to the new system's hard drive, the installation process runs System Configurator to customize the boot image (especially the network information) that is now stored on the hard drive so that the new system will boot with its own host and network identity.

When the installation completes, the system will begin beeping to indicate it is ready to have the floppy removed and the power cycled. Once you do this, you should have a newly cloned system.

Post-Installation Notes

The system cloning process modifies the Red Hat network configuration file /etc/sysconfig/network. Check to make sure this file contains the proper network settings after booting the system. (If you need to make changes, you can rerun the network script with the command service network restart, or better yet, test with a clean reboot.)

Check to make sure the host name of your newly cloned host is in its local /etc/hosts file (the LPRng daemon called lpd, for example, will not start if the host name is not found in the hosts file).

Configure any additional network interface cards. On Red Hat systems, the second Ethernet interface card is configured using the file /etc/sysconfig/network-scripts/ifcfg-eth1, for example.

If you are using SSH, you need to create new SSH keys for local users (see the ssh-keygen command in the previous chapter) and for the host (on Red Hat systems, this is accomplished by removing the /etc/ssh/ssh_host* files). Note, however, the simplest configuration to administer is one that uses the same SSH keys on all cluster nodes. (When a client computer connects to the cluster using SSH, it will always see the same SSH host key, regardless of which cluster node it connects to, if all cluster nodes use the same SSH configuration.)

If you started a DHCP server, you can turn it back off until the next time you need to clone a system:

 #/etc/rc.d/init.d/dhcpd stop

Finally, you will probably want to kill the rsync daemon started by the prepareclient command on the Golden Client.

[1]Or any account of your choice. The command to create the web user was provided in Chapter 4.

[2]Older versions of SystemImager used the command makedhcpserver.

[3]If you really need to create a newly cloned system at a remote location by sending your system image over the Internet, see the SystemImager manual online for instructions.

[4]This boot floppy will contain a version of Linux that is designed for the SystemImager project, called Brian's Own Embedded Linux or BOEL. To read more about BOEL, see the article describing it online.

[5]Older versions of SystemImager used the command makeautoinstalldiskette.

[6]In the make menuconfig screen, this option is under the Filesystems menu. Select (either as M for modular or * to compile it directly into the kernel) the DOS FAT fs support option and the MSDOS fs support option will appear. Select this option as well; then recompile the kernel (see Chapter 3).

[7]You can also do this when you execute the mkautoinstalldiskette command. See mkautoinstalldiskette --help for more information.

[8]See "How do I change the disk type(s) that my target machine(s) will use?" in the SystemImager FAQ for more information.

[9]/tftpboot/systemimager in older versions of SystemImager.

Performing Maintenance: Updating Clients

If you make changes to the Golden Client, simply enter the getimage command on the SystemImager server; then updateclient on the cloned systems (the backup servers in the high-availability server pairs or the cluster nodes inside the cluster).


If you need to exclude certain directories from an update, create an entry in /etc/systemimager/updateclient.local.exclude.


The SystemInstaller project grew out of the Linux Utility for cluster Installation (LUI) project and allows you to install a Golden Image directly to a SystemImager server using RPM packages. SystemInstaller does not use a Golden Client; instead, it uses RPM packages that are stored on the SystemImager server to build an image for a new clone, or cluster node.[10]

The SystemInstaller project maintains its own website.

SystemInstaller uses RPM packages to make it easier to clone systems using different types of hardware (SystemImager works best on systems with similar hardware).

System Configurator

As we've previously mentioned, SystemImager uses the System Configurator to configure the boot loader, loadable modules, and network settings when it finishes installing the software on a clone. The System Configurator software package was designed to meet the needs of the SystemImager project, but it is distributed and maintained separately from the SystemImager package in the hope that it will be used for other applications. For example, a system administrator could use System Configurator to install driver software and configure the NICs of all cluster nodes even when the cluster nodes use different hardware models and different Linux distributions.

For more information about System Configurator, see its project website.

SystemInstaller, SystemImager, and System Configurator are collectively called the System Installation Suite. The goal of the System Installation Suite is to provide an operating system-independent method of installing, maintaining, and upgrading systems.

For more information on the System Installation Suite, see its project website.

[10]This is the method used by the OSCAR project, by the way. SystemInstaller runs on the "head node" of the OSCAR cluster.

In Conclusion

The SystemImager package uses the rsync software and (optionally) the SSH software to copy the contents of a system, called the Golden Client, onto the disk drive of another system, called the SystemImager server. A clone of the Golden Client can then be created by booting a new system off of a specially created Linux boot floppy containing software that formats the locally attached disk drive(s) and then pulls the Golden Client image off of the SystemImager server.

This cloning process can be used to create backup servers in high-availability server pairs, or cluster nodes. It can also be used to update cloned systems when changes are made to the Golden Client.

Chapter 6: Introduction to Heartbeat


This chapter introduces a software package called Heartbeat that gives you the ability to failover a resource from one computer to another. The following three chapters will explore Heartbeat and high-availability techniques in detail. Part III of this book will build on the techniques described here, and it will explain how to build a highly available cluster. Once you know how to use Heartbeat properly, you can deploy just about any service and make it highly available.


A highly available system has no single points of failure. A single point of failure is a single system component that upon failing causes the whole system to fail.

Heartbeat works like this: you tell Heartbeat which computer owns a particular resource (which computer is the primary server), and the other computer will automatically be the backup server. You then configure the Heartbeat daemon running on the backup server to listen to the "heartbeats" coming from the primary server. If the backup server does not hear the primary server's heartbeat, it initiates a failover and takes ownership of the resource.
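
In configuration terms (covered in detail in Chapters 7 and 8), this ownership is expressed in the /etc/ha.d/haresources file. A minimal sketch, using a hypothetical node name, address, and service:

```
# /etc/ha.d/haresources -- an identical copy goes on both nodes
# "primary.example.com" is the primary owner of the IP address 10.0.0.51
# and the httpd service; the other node automatically becomes the backup.
primary.example.com 10.0.0.51 httpd
```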

The Physical Paths of the Heartbeats

The Heartbeat program running on the backup server can check for heartbeats coming from the primary server over the normal Ethernet network connection, but normally Heartbeat is configured to work over a separate physical connection between the two servers. This separate physical connection can be either a serial cable or another Ethernet network connection (via a crossover cable[1] or mini hub, for example).

Heartbeat will work over one or more of these physical connections at the same time and will consider the primary node active as long as heartbeats are received on at least one of the physical connections. Figure 6-1 shows three physical connections, or paths, between the servers. The first path, the normal Ethernet network used to connect systems to each other on the network, is the least preferred for sending the heartbeats, because it will add extra traffic to your network (though this is a trivial load under normal circumstances). Your choice of whether to use one or more new serial or Ethernet connections will depend on your situation.

Image from book
Figure 6-1: Physical paths for heartbeats

High-availability best practices dictate that heartbeats should travel over multiple independent communication paths.[2] This helps prevent the communication path itself from becoming a single point of failure.

Serial Cable Connection

A serial connection is slightly more secure than an Ethernet connection, because a hacker will not be able to run telnet, ssh, or rlogin over the serial cable if they break into one of the systems. (The serial cable is a simple crossover cable connected to the COM port on each system.) However, because serial cables are short,[3] the servers must be located near each other, usually in the same computer room.

Ethernet Cable Connection

Using a new Ethernet network (or Ethernet crossover cable) eliminates any distance limitation between the servers. It also allows you to synchronize the filesystems on the two servers (as described in Chapter 4) without placing any extra network traffic on your normal Ethernet network.

Using two physical paths to connect the primary and backup servers provides redundancy for heartbeat control messages and is therefore a requirement of a no-single-point-of-failure configuration. The two physical paths between the servers need not be of the same type; an Ethernet and a serial connection can be used together in the same configuration.

Partitioned Clusters and STONITH

For true redundancy, two physical connections should carry heartbeat control messages between the primary and backup server. These two physical connections will help prevent a situation where a network or cable failure causes both nodes to try and assume ownership of the same resources. This condition is known as a split-brain or partitioned cluster,[4] and it can have dire consequences if you are using two heartbeat nodes to control one physical device (such as a shared SCSI or Fibre Channel disk drive). To avoid this situation, take the following precautions:

  • Create a redundant, reliable physical connection between heartbeat nodes (preferably using both a serial connection and an Ethernet connection) to carry heartbeat control messages.

  • Allow for the ability to forcibly shut down one of the heartbeat nodes when a partitioned cluster is detected.

This second precaution has been dubbed "shoot the other node in the head," or STONITH. Using a special hardware device that can power off a node through software commands (sent over a serial or network cable), Heartbeat can implement a Stonith[5] configuration designed to avoid cluster partitioning. (See Chapter 9 for more information.)


It is difficult to guarantee exclusive access to resources and avoid split-brain conditions when the primary and backup heartbeat servers are not in close proximity. You will very likely reduce resource reliability and increase system administration headaches if you try to use Heartbeat as part of your disaster recovery or business resumption plan over a wide area network (WAN). In the cluster solution described in this book, no Heartbeat pairs of servers need communicate over a WAN.

[1]A crossover cable is simpler and more reliable than a mini hub because it does not require external power.

[2]In Blueprints for High Availability, Evan Marcus and Hal Stern define three different types of networks used in failover configurations: the Heartbeat network, the production network (for client access to cluster resources), and an administrative network (for system administrators to access the servers and do maintenance tasks).

[3]The original EIA-232 specification did not specify a distance limitation, but 50 feet has become the industry's de facto distance limit for normal serial communication. See the Serial HOWTO for more information.

[4]Sometimes the term cluster fencing or i/o fencing is used to describe what should happen when a cluster is partitioned. It means that the cluster must be able to build a fence between partitions and decide on which side of the fence the cluster resources should reside.

[5]Although an acronym at birth, this term has moved up in stature to become, by all rights, a word—it will be used as such throughout the remainder of this book.

Heartbeat Control Messages

In this chapter we will look at the three most basic heartbeat control messages[6] (three kinds of packets, if you are using an Ethernet network):

  • Heartbeats or status messages

  • Cluster transition messages

  • Retransmission requests


Heartbeats

Heartbeats (sometimes called status messages) are broadcast, unicast, or multicast packets that are only about 150 bytes long. You control how often each computer broadcasts its heartbeat and how long the heartbeat daemon running on another node should wait before assuming something has gone wrong.
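
These intervals are set in the /etc/ha.d/ha.cf configuration file. The directive names below are Heartbeat's own; the values are only illustrative:

```
# /etc/ha.d/ha.cf -- heartbeat timing (example values)
keepalive 2     # send a heartbeat every 2 seconds
warntime 10     # log a warning after 10 seconds without one
deadtime 30     # declare the other node dead after 30 seconds
```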

Cluster Transition Messages

The two most prevalent cluster transition messages are ip-request and ip-request-resp. These messages are relatively rare and contain the conversation between heartbeat daemons when they want to move a resource from one computer to another.

When you repair the primary server and it comes back online, it uses ip-request to ask the backup server to release the resource it took ownership of when the primary server failed. The backup server then shuts off the service and replies with an ip-request-resp message to inform the primary server that it no longer owns the resource. When the primary server receives this ip-request-resp, it starts up the service and offers it to the client computers again (it takes back ownership of the resource).

Retransmission Requests

The rexmit-request (or ns_rexmit) message—a request for a retransmission of a heartbeat control message—is issued when one of the servers running heartbeat notices that it is receiving heartbeat control messages that are out of sequence. (Heartbeat daemons use sequence numbers to ensure packets are not dropped or corrupted.) Heartbeat daemons will only ask for a retransmission of a heartbeat control message (with no sequence number, hence the "ns" in ns_rexmit) once every second, to avoid flooding the network with these retry requests when something goes wrong.
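
As a toy illustration of sequence-gap detection (this is not Heartbeat's actual code), the following shell function prints a simulated retransmission request for every missing number in a stream of sequence numbers:

```shell
# Read one sequence number per line; for every gap between consecutive
# numbers, print a simulated retransmission request.
detect_gaps() {
    awk '
        NR == 1 { prev = $1; next }
        {
            for (s = prev + 1; s < $1; s++)
                print "rexmit-request", s
            prev = $1
        }
    '
}

# Here sequence number 3 was dropped, and so were 5 and 6:
printf '1\n2\n4\n7\n' | detect_gaps
```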

Ethernet Heartbeat Control Messages

All three of these heartbeat control messages are sent using the UDP protocol to either the port number specified in the /etc/ha.d/ file, or to the multicast address specified in this same configuration file (when using Ethernet).

Currently Heartbeat does not support more than two nodes.[7] More than one pair of heartbeat servers can share the same Ethernet network connection and exchange heartbeats and heartbeat control messages, but each of these pairs of heartbeat servers must use a unique UDP port number as specified in the /etc/ha.d/ file, or a unique unicast or multicast address.

Security and Heartbeat Control Messages

In addition to using a numbering sequence to recover from dropped or corrupted packets, Heartbeat digitally signs each packet using either a 128-bit hashing algorithm called MD5 (see RFC[8] 1321), or the even more secure 160-bit HMAC-SHA1 (see RFC 2104). (You enter the same encryption password for either of these methods on both the primary and the backup heartbeat nodes.)


The Heartbeat developers recommend that you use one of these encryption methods even on private networks to protect Heartbeat from an attacker (spoofed packets or a packet replay attack).
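
The shared secret and signing method go in the /etc/ha.d/authkeys file, which must be identical on both nodes and readable only by root. A sketch with a placeholder secret:

```
# /etc/ha.d/authkeys -- same file on both nodes, chmod 600
auth 1
1 sha1 ReplaceThisWithYourOwnSecret
```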

[6]For a complete list of heartbeat control message types, see the Heartbeat project documentation.

[7]To be more accurate, Heartbeat will only allow two nodes to share haresources entries (see Chapter 8 for more information about the proper use of the haresources file). In this book we are focusing on building a pair of high-availability LVS-DR directors to achieve high-availability clustering.

[8]The Requests for Comments (RFCs) are available online.

How Client Computers Access Resources

Normally client computers know the name of the server offering the resource they want to use, and they use the Domain Name System (DNS) to look up the proper IP address of the server. Once the routing of the packet through the Internet or WAN is done, however, this IP address has to be converted into a physical network card address called the Media Access Control (MAC) address.

At this point, the router or the locally connected client computers use the Address Resolution Protocol (ARP) to ask: "Who owns this IP address?" When the computer using the IP address responds, the router or client computer adds the IP address and its corresponding MAC address to a table in its memory called the ARP table so it won't have to ask each time it needs to use the IP address. After a few minutes, most computers let unused ARP table entries expire, to make sure they do not hold unused (or inaccurate) addresses in memory.[9]

Failover using IP Address Takeover (IPAT)

In order to move a resource (a service or daemon and its associated IP address) from one computer to another, we need a way to move the IP address from the primary computer to the backup computer. The method normally used by Heartbeat is called IP address takeover (sometimes called IPAT). To accomplish IPAT, Heartbeat uses secondary IP addresses (formerly called IP aliases in older Linux kernels) and gratuitous ARP broadcasts.

[9]For more information see RFC 826, "Ethernet Address Resolution Protocol."

Secondary IP Addresses and IP Aliases

Secondary IP addresses and IP aliases are two different methods for adding multiple IP addresses to the same physical network card. The first IP address (also called the primary address) is added to the NIC at boot time. Additional IP addresses are then added by Heartbeat based on entries in the haresources configuration file (we'll discuss this file in detail in Chapter 8)—additional IP addresses are either IP aliases or secondary IP addresses. As of this writing, the Linux kernel supports both IP aliases and secondary IP addresses, though IP aliases are deprecated in favor of secondary IP addresses.


IP aliasing (sometimes called network interface aliasing) is a standard feature of the Linux kernel when standard IPv4 networking is configured in the kernel. Older versions of the kernel required you to configure IP alias support as an option in the kernel.

By using secondary IP addresses or IP aliases you can offer a service such as sendmail on one IP address and offer another service, like HTTP, on another IP address, even though these two IP addresses are really owned by the same computer (one physical network card at one MAC address).

When you use Heartbeat to offer services on a secondary IP address (or IP alias) the service is owned by the server (meaning the server is the active, or primary, node) and the server also owns the IP addresses used to access the service. The backup node must not be using this secondary IP address (or IP alias). When the primary node fails, and the service should be offered by the backup server, the backup server will not only need to start the service or daemon, it will also need to add the proper secondary IP address or IP alias to one of its network cards.

A diagram of this two-node Heartbeat cluster configuration using an Ethernet network to carry the heartbeat packets is shown in Figure 6-2.

Image from book
Figure 6-2: A basic Heartbeat configuration

In Figure 6-2, the primary server's own primary IP address never needs to move to the backup server. The backup server's primary IP address will likewise never need to move to another network card. However, two additional (secondary) IP addresses are each associated with a particular service running on the primary server. If the primary server goes down, these IP addresses need to move to the backup server, as shown in Figure 6-3.

Image from book
Figure 6-3: The same basic Heartbeat configuration after failure of the primary server

Ethernet NIC Device Names

The Linux kernel assigns the physical Ethernet interfaces names such as eth0, eth1, eth2, and so forth. These names are assigned either at boot time or when the Ethernet driver for the interface is loaded (if you are using a modular kernel), and they are based on the configuration built during the system installation in the file /etc/modules.conf (or /etc/conf.modules on older versions of Red Hat Linux). You can use the following command to see the NIC driver, interrupt (IRQ) address, and I/O (base) address assigned to each PCI network interface card (assuming it is a PCI card) that was recognized during the boot process:

 #lspci -v | less

You can then check this information against the interface configuration using the ifconfig command:

 #ifconfig -a | less

This should help you determine which eth number is assigned to a particular physical network card by the kernel eth numbering scheme. (See Appendix C for more information about NICs.)
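
On a sysfs-based Linux system you can also read the MAC address of each interface directly from the kernel, which makes it easier to match the ifconfig output against physical cards. A quick sketch, assuming /sys is mounted:

```shell
# List every network interface the kernel knows about along with its
# hardware (MAC) address, as recorded under /sys/class/net.
for dev in /sys/class/net/*; do
    printf '%-8s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done
```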

Secondary IP Address Names

Secondary IP addresses are added after a primary IP address has already been configured for a NIC. The primary IP address can be assigned to a NIC using the ifconfig command (this is normally how Linux distributions assign IP addresses at boot time) or by using the ip command. Secondary IP addresses, however, can only be added using the ip command. When Heartbeat needs to add a secondary IP address to a NIC it uses the script IPaddr2 (included with the Heartbeat distribution) to run the proper ip command.

Both the primary and the secondary IP addresses can be viewed with this command:

 #ip addr sh

Secondary IP addresses are not shown by the ifconfig command.

Creating and Deleting Secondary IP Addresses with the ip Command

Creating and deleting secondary IP addresses in Linux is easy. To create (add) a secondary IP address for the eth0 NIC, use this command:

 #ip addr add broadcast dev eth0

In this example, we are assuming that the eth0 NIC already has an IP address on the network, and we are adding as an additional (secondary) IP address associated with this same NIC. To view the IP addresses configured for the eth0 NIC with the previous command, enter this command:

 #ip addr sh dev eth0

To remove (delete) this secondary IP address, enter this command:

 #ip addr del broadcast dev eth0

The ip command is provided as part of the IProute2 package. (The RPM package name is iproute.)

Fortunately, you won't need to enter these commands to configure your secondary IP addresses—Heartbeat does this for you automatically when it starts up and at failover time (as needed) using the IPaddr2 script.
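
The ip command takes the netmask as a prefix length appended to the address (/24, for example), and the broadcast address can be derived from the address and prefix with a little shell arithmetic. The helper below is only an illustration (it is not part of Heartbeat or IProute2), and the addresses used are hypothetical:

```shell
# Compute the IPv4 broadcast address for a dotted address and prefix length.
broadcast_of() {
    # $1 = dotted IPv4 address, $2 = prefix length (e.g. 24)
    IFS=. read -r a b c d <<EOF
$1
EOF
    ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))
    mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
    bcast=$(( ip | (~mask & 0xFFFFFFFF) ))
    echo "$(( (bcast >> 24) & 255 )).$(( (bcast >> 16) & 255 )).$(( (bcast >> 8) & 255 )).$(( bcast & 255 ))"
}

broadcast_of 192.168.1.10 24   # → 192.168.1.255
```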

IP Aliases

As previously mentioned, you only need to use one method for assigning IP addresses under Heartbeat's control: secondary IP addresses or IP aliases. If you are new to Linux, you should use secondary IP addresses as described in the previous sections and skip this discussion of IP aliases.

You add IP aliases to a physical Ethernet interface (that already has an IP address associated with it) by running the ifconfig command. The alias is specified by adding a colon and a number to the interface name. The first IP alias associated with the eth0 interface is called eth0:0, the second is called eth0:1, and so forth. Heartbeat uses the IPaddr script to create IP aliases.

Creating and Deleting IP Aliases with the ifconfig Command

In our previous example, we were using IP address with a network mask of on the eth0 Ethernet interface. To manually add an IP alias to the same interface, use this command:

 #ifconfig eth0:0 netmask up

You can then view the list of IP addresses and IP aliases by typing the following:

 #ifconfig -a

or simply


This command will produce a listing that looks like the following:

 eth0       Link encap:Ethernet HWaddr 00:99:5F:0E:99:AB
               inet addr: Bcast: Mask:
               UP BROADCAST RUNNING  MTU:1500 Metric:1
               RX packets:976 errors:0 dropped:0 overruns:0 frame:0
               TX packets:730 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:100
               Interrupt:11 Base address:0x1400
 eth0:0     Link encap:Ethernet HWaddr 00:99:5F:0E:99:AB
               inet addr:  Bcast: Mask:
               UP BROADCAST RUNNING  MTU:1500  Metric:1
               Interrupt:11 Base address:0x1400

From this report you can see that the MAC addresses (called HWaddr in this report) for eth0 and eth0:0 are the same. (The interrupt and base addresses also show that these IP addresses are associated with the same physical network card.) Client computers locally connected to the computer can now use an ARP broadcast to ask "Who owns IP address" and the primary server will respond with "I own IP on MAC address 00:99:5F:0E:99:AB."


The ifconfig command does not display the secondary IP addresses added with the ip command.

IP aliases can be removed with this command:

 #ifconfig eth0:0 down

This command should not affect the primary IP address associated with eth0 or any additional IP aliases (they should remain up).


Use the preceding command to see whether your kernel can properly support IP aliases. If this command causes the primary IP address associated with this network card to stop working, you need to upgrade to a newer version of the Linux kernel. Also note that this command will not work properly if you attempt to use IP aliases on a different subnet.[10]

Offering Services

Once you are sure a secondary IP address or IP alias can be added to and removed from a network card on your Linux server without affecting the primary IP address, you are ready to tell Heartbeat which services it should offer, and which secondary IP address or IP alias it should use to offer the services.


Secondary IP addresses and IP aliases used by highly available services should always be controlled by Heartbeat (in the /etc/ha.d/haresources file as described later in this chapter and in the next two chapters). Never use your operating system's ability to add IP aliases as part of the normal boot process (or a script that runs automatically at boot time) on a Heartbeat server. If you do, your server will incorrectly claim ownership of an IP address when it boots. The backup node should always be able to take control of a resource along with its IP address and then reset the power to the primary node without worrying that the primary node will try to use the secondary IP address as part of its normal boot procedure.

Gratuitous ARP (GARP) Broadcasts

As mentioned previously, client computers normally use the Address Resolution Protocol (ARP) to figure out which hardware address owns a particular IP address, and then they store this address in an ARP table. The Heartbeat program uses a little trick, called Gratuitous ARP (GARP) broadcasts, to forcibly update these client computer ARP tables with a new hardware (MAC) address when the primary server fails, effectively convincing the client computers to talk to the backup server.[11]

GARP broadcasts[12] are just sneaky ARP broadcasts (broadcasts, remember, are only seen by locally connected nodes). The GARP broadcast asks every node connected to the network, "Who owns this IP address?" when, in fact, the ARP request packet header has a source (or reply) IP address equal to the requested IP address. This forces all nodes connected to the network to update their ARP tables with the new source address.


The send_arp program included with recent versions of Heartbeat (send_arp version 1.6) uses both ARP request and ARP reply packets when sending GARPs. If you experience problems with IP address failover on older versions of Heartbeat, try upgrading to the latest version of Heartbeat.

Heartbeat uses the /usr/lib/heartbeat/send_arp program (formerly /etc/ha.d/resource.d/send_arp) to send these specially crafted GARP broadcasts. You can use this same program to build a script that will send GARPs. The following is an example script (called iptakeover) that uses send_arp to do this.

 #!/bin/bash
 # iptakeover script
 # Simple script to take over an IP address.
 # Usage is "iptakeover {start|stop|status}"
 # SENDARP is the program included with the Heartbeat program that
 # sends out an ARP request.
 # REALIP is the IP address for this NIC on your LAN.
 # ROUTERIP is the IP address for your router.
 # SECONDARYIP1 is the first secondary IP address (or IP alias) for a
 # service/resource.
 # NETMASK is the netmask of this card, and BROADCAST is the broadcast
 # address for this subnet.
 # MACADDR is the hardware address for the NIC card.
 # (You'll find it using the command "/sbin/ifconfig")
 # Set these for your own site (the values below are placeholders):
 SENDARP=/usr/lib/heartbeat/send_arp
 REALIP=10.0.0.2
 ROUTERIP=10.0.0.1
 SECONDARYIP1=10.0.0.51
 NETMASK=24
 BROADCAST=10.0.0.255
 MACADDR=00:99:5F:0E:99:AB
 case $1 in
 start)
     # Make sure our primary IP is up
     /sbin/ifconfig eth0 $REALIP up
     # Associate the secondary IP address with this NIC
     /sbin/ip addr add $SECONDARYIP1/$NETMASK broadcast $BROADCAST dev eth0
     # Or, to create an IP alias instead of a secondary IP address, use:
     # /sbin/ifconfig eth0:0 $IPALIAS1 netmask $NETMASK up
     # Create a new default route directly to the router
     /sbin/route add default gw $ROUTERIP
     # Now send out 5 Gratuitous ARP broadcasts (ffffffffffff)
     # at two-second intervals to tell the local computers to update
     # their ARP tables.
     $SENDARP -i 2000 -r 5 eth0 $SECONDARYIP1 $MACADDR $SECONDARYIP1 ffffffffffff
     ;;
 stop)
     # Take down the secondary IP address for the service/resource.
     /sbin/ip addr del $SECONDARYIP1/$NETMASK broadcast $BROADCAST dev eth0
     # Or, for an IP alias:
     # /sbin/ifconfig eth0:0 down
     ;;
 status)
     # Check whether we own the secondary IP address. (ip addr is used
     # here because ifconfig does not show secondary IP addresses.)
     OWN_ADDR=`/sbin/ip addr | grep $SECONDARYIP1`
     if [ "$OWN_ADDR" != "" ]; then
         echo "OK"
     else
         echo "DOWN"
     fi
     ;;
 esac
 # End of the case statement.

You do not need to use this script. It is included here (and on the CD-ROM) to demonstrate exactly how Heartbeat performs GARPs.

The important line in the above code listing is the one that runs $SENDARP.

This command runs /usr/lib/heartbeat/send_arp and sends an ARP broadcast (to the link-layer broadcast hardware address ffffffffffff) with a source and destination IP address equal to the secondary IP address to be added to the backup server.

You can use this script to find out whether your network equipment supports IP address failover using GARP broadcasts. With the script installed on two Linux computers (and with the MAC address in the script changed to the appropriate address for each computer), you can move an IP address back and forth between the systems to find out whether Heartbeat will be able to do the same thing when it needs to failover a resource.


When using Cisco equipment, you should be able to log on to the router or switch and enter the command show arp to watch the MAC address change as the IP address moves between the two computers. Most routers have similar capabilities.

[10]The "secondary" flag will not be set for the IP alias if it is on a different subnet. See the output of the command ip addr to find out whether this flag is set. (It should be set for Heartbeat to work properly.)

[11]To use the Heartbeat IP failover mechanism with minimal downtime and minimal disruption of the highly available services you should set the ARP cache timeout values on your switches and routers to the shortest time possible. This will increase your ARP broadcast traffic, however, so you will have to find the best trade-off between ARP timeout and increased network traffic for your situation. (Gratuitous ARP broadcasts should update the ARP table entries even before they expire, but if they are missed by your network devices for some reason, it is best to have regular ARP cache refreshes.)

[12]Gratuitous ARPs are briefly described in RFC 2002, "IP Mobility Support." Also see RFC 826, "Ethernet Address Resolution Protocol."

Resource Scripts

All of the scripts under Heartbeat's control are called resource scripts. These scripts may include the ability to add or remove a secondary IP address or IP alias, or they may include packet-handling rules (see Chapter 2) in addition to being able to start and stop a service. Heartbeat looks for resource scripts in the Linux Standards Base (LSB) standard init directory (/etc/init.d) and in the Heartbeat /etc/ha.d/resource.d directory.

Heartbeat should always be able to start and stop your resource by running your resource script and passing it the start or stop argument. As illustrated in the iptakeover script, this can be accomplished with a simple case statement like the following:

 #!/bin/bash
 case $1 in
 start)
     # commands to start my resource
     ;;
 stop)
     # commands to stop my resource
     ;;
 status)
     # commands to test if I own the resource
     ;;
 *)
     echo "Syntax incorrect. You need one of {start|stop|status}"
     ;;
 esac

The first line, #!/bin/bash, makes this file a bash (Bourne Again Shell) script. The second line tests the first argument passed to the script and attempts to match this argument against one of the lines in the script containing a closing parenthesis. The last match, *), is a wildcard match that says, "If you get this far, match on anything passed as an argument." So if the program flow ends up executing these commands, the script needs to complain that it did not receive a start, stop, or status argument.

Status of the Resource

Heartbeat should always know which computer owns the resource or resources being offered. When you write a script to start or stop a resource, it is important to write the script in such a way that it will accurately determine whether the service is currently being offered by the system. If it is, the script should respond with the word OK, Running, or running when passed the status argument—Heartbeat requires one of these three responses if the service is running. What the script returns when the service is not running doesn't matter to Heartbeat—you can use DOWN or STOPPED, for example. However, do not say not running or not OK for the stopped status. (See the "Using init Scripts as Heartbeat Resource Scripts" section later in this chapter for more information on proposed standard exit codes for Linux.)
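Heartbeat's status check amounts to a substring match on the script's output. A minimal sketch of that check (the sample output line is a hypothetical example, and grep stands in for Heartbeat's internal matching):

```shell
# Only "OK", "Running", or "running" in the status output counts as
# running; anything else is treated as stopped.
status_output="sendmail (pid 4511) is running..."
if echo "$status_output" | grep -q -E 'OK|Running|running'; then
    echo "resource active"
else
    echo "resource inactive"
fi
```

Note that this is also why a stopped status must not print "not running": the word running would still match.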


If you want to write temporary scratch files with your resource script and have Heartbeat always remove these files when it first starts up, write your files into the directory /var/lib/heartbeat/rsctmp (on a standard Heartbeat installation on Linux).

Resource Ownership

So how do you write a script to properly determine if the machine it is running on currently owns the resource? You will need to figure out how to do this for your situation, but the following are two sample tests you might perform in your resource script.

Testing for Resource Ownership—Is the Daemon Running?

Red Hat Linux and SuSE Linux both ship with a program called pidof that can test to see if a daemon is running. The pidof program finds the process identifier (PID) of a running daemon when you give it the name of the program file that was used to start the daemon.

When you run the pidof script from within your resource script you should always pass it the full pathname of the daemon you want to search for in the process table. For example, to determine the PID of the sendmail program, you would use this command:

 #pidof /usr/sbin/sendmail

If sendmail is running, its PID number will be printed, and pidof exits with a return code of zero. If sendmail is not running, pidof does not print anything, and it exits with a nonzero return code.


See the /etc/init.d/functions script distributed with Red Hat Linux for an example use of the pidof program.

A sample bash shell script that uses pidof to determine whether the sendmail daemon is running (when given the status argument) might look like this:

 #!/bin/bash
 case $1 in
 status)
     if /sbin/pidof /usr/sbin/sendmail >/dev/null; then
         echo "OK"
     else
         echo "DOWN"
     fi
     ;;
 esac

If you place these commands in a file, such as /etc/ha.d/resource.d/myresource, and chmod the file to 755, you can then run the script with this command:

 #/etc/ha.d/resource.d/myresource status

If sendmail is running, the script will return the word OK; if not, the script will say DOWN. (You can stop and start the sendmail daemon for testing purposes by running the script /etc/init.d/sendmail and passing it the start or stop argument.)

Testing for Resource Ownership—Do You Own the IP Address?

If you use Heartbeat's built-in ability to automatically failover secondary IP addresses or IP aliases, you do not need to worry about this second test (see the discussion of the IPaddr2 resource script and the discussion of resource groups in Chapter 8). However, if you are using the iptakeover script provided on the CD-ROM, or if you are building a custom configuration that makes changes to network interfaces or the routing table, you may want to check to see if the system really does have the secondary IP address currently configured as part of your script's status test.

To perform this check, the script needs to look at the current secondary IP addresses configured on the system. We can modify the previous script and add the ip command to show the secondary IP address for the eth0 NIC as follows:

 #!/bin/bash
 case $1 in
 status)
     if /sbin/pidof /usr/sbin/sendmail >/dev/null; then
         # SECONDARYIP1 holds the secondary IP address to check for
         ipaliasup=`/sbin/ip addr show dev eth0 | grep "$SECONDARYIP1"`
         if [ "$ipaliasup" != "" ]; then
             echo "OK"
             exit 0
         fi
     fi
     echo "DOWN"
     ;;
 esac

Note that this script will only work if you use secondary IP addresses. To look for an IP alias, modify this line in the script:

 ipaliasup=`/sbin/ip addr show dev eth0 | grep "$SECONDARYIP1"`

to match the following (where $IPALIAS1 holds the IP alias):

 ipaliasup=`/sbin/ifconfig eth0 | grep "$IPALIAS1"`

This changed line of code will now check to see if the IP alias is associated with the interface eth0. If so, this script assumes the interface is up and working properly, and it returns a status of OK and exits from the script with a return value of 0.
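The ownership test is nothing more than a substring search of the interface listing. The following self-contained sketch uses canned ip output instead of calling the real command, and the address 10.1.1.3 is a hypothetical placeholder:

```shell
# Simulated output of "/sbin/ip addr show dev eth0"; in the real
# script this string comes from running the command.
fake_ip_output="inet 10.1.1.3/24 brd 10.1.1.255 scope global secondary eth0"
SECONDARYIP1="10.1.1.3"

# Same test the resource script performs: grep for the address.
ipaliasup=$(echo "$fake_ip_output" | grep "$SECONDARYIP1")
if [ "$ipaliasup" != "" ]; then
    echo "OK"
else
    echo "DOWN"
fi
```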


Testing for a secondary IP address or an IP alias should only be added to resource scripts in special cases; see Chapter 8 for details.

Testing a Resource Script

You can test to make sure the Heartbeat system will work properly with your newly created resource script by running the following command:

 #/usr/lib/heartbeat/ResourceManager status <resource-name>

where <resource-name> is the name of your newly created resource script in the /etc/rc.d/init.d or /etc/ha.d/resource.d directory. Then tell your shell to display the value of your resource script's return code with this command:

 #echo $?

The echo $? command must be the next command you type after running the script; otherwise, the return value stored in the shell variable ? will be overwritten.
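This is easy to demonstrate at the shell prompt with any failing command (false is used here only as a stand-in for a resource script that exits nonzero):

```shell
false            # a command that always exits with return code 1
echo $?          # prints 1 -- the exit code of false
echo $?          # prints 0 -- now it holds the exit code of the first echo
```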

If the return code is 3, it means your script returned something other than OK, Running, or running, which means that Heartbeat thinks that it does not own the resource. If this command returns 0, Heartbeat considers the resource active.

Once you have built a proper resource script and placed it in the /etc/rc.d/init.d directory or the /etc/ha.d/resource.d directory and thoroughly tested its ability to handle the start, stop, and status arguments, you are ready to configure Heartbeat.

Using init Scripts as Heartbeat Resource Scripts

Most of the /etc/init.d scripts (the scripts used at system boot time by the init process) will already properly handle the start, stop, and status arguments. In fact, the Linux Standard Base (LSB) specification states that all init scripts on LSB-compliant Linux distributions must implement the following arguments: start, stop, restart, reload, force-reload, and status.

The LSB project also states that if the status argument is sent to an init script it should return 0 if the program is running.

For the moment, however, you need only rely on the convention that a status command should return the word OK, or Running, or running to standard output (or "standard out").[13] For example, Red Hat Linux scripts, when passed the status argument, normally return a line such as the following example from the /etc/rc.d/init.d/sendmail status command:

 sendmail (pid 4511) is running...

Heartbeat looks for the word running in this output, and it will ignore everything else, so you do not need to modify Red Hat's init scripts to use them with Heartbeat (unless you want to also add the test for ownership of the secondary IP address or IP alias).

However, once you tell Heartbeat to manage a resource or script, you need to make sure the script does not run during the normal boot (init) process. You can do this by entering this command:

 #chkconfig --del <scriptname>

where <scriptname> is the name of the file in the /etc/init.d directory. (See Chapter 1 for more information about the chkconfig command.)

The init script must not run as part of the normal boot (init) process, because you want Heartbeat to decide when to start (and stop) the daemon. If you start the Heartbeat program but already have a daemon running that you have asked Heartbeat to control, you will see a message like the following in the /var/log/messages file:

 WARNING: Non-idle resources will affect resource takeback.

If you see this error message when you start Heartbeat, you should stop all of the resources you have asked Heartbeat to manage, remove them from the init process (with the chkconfig --del <scriptname> command), and then restart the Heartbeat daemon (or reboot) to put things in a proper state.

[13]Programs and shell scripts can return output to standard output or to standard error. This allows programmers to control, through the use of redirection symbols, where the message will end up—see the "REDIRECTION" section of the bash man page for details.
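The distinction the footnote draws is easy to see in a one-liner: the subshell below writes one line to each stream, and redirecting stderr to /dev/null filters out only the stderr line:

```shell
# Two messages, one per stream; only the stdout one survives the redirect.
( echo "to stdout"; echo "to stderr" >&2 ) 2>/dev/null
```

This prints only "to stdout"; swapping the redirect to 1>/dev/null would leave only the stderr message visible.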

Heartbeat Configuration Files

Heartbeat uses three configuration files:

ha.cf Specifies how the heartbeat daemons on each system communicate with each other.

haresources Specifies which server should normally act as the primary server for a particular resource and which server should own the IP address that client computers will use to access the resource.

authkeys Specifies how the Heartbeat packets should be signed (authenticated).


The ha in these scripts stands for high availability. (The name "haresources" is not related to bunny rabbits.)

In Conclusion

This chapter has introduced the Heartbeat package and the theory behind properly deploying it. To ensure client computers always have access to a resource, use the Heartbeat package to make the resource highly available. A highly available resource does not fail even if the computer it is running on crashes.

In Part III of this book I'll describe how to make the cluster load-balancing resource highly available. A cluster load-balancing resource that is highly available is the cornerstone of a cluster that is able to support your enterprise.

In the next few chapters I'll focus some more on how to properly use the Heartbeat package and avoid the typical pitfalls you are likely to encounter when trying to make a resource highly available.

Часть 7: Пример конфигурации Heartbeat

The last chapter introduced the three configuration files used to control the operation of Heartbeat: ha.cf, haresources, and authkeys. The recipe in this chapter describes how to use these files on two servers, or nodes, so that Heartbeat can deploy resources in a high-availability configuration.

In a normal Heartbeat configuration, these files will be the same on the primary and the backup server (complex and subtle problems can be introduced into your high-availability configuration if they are not). The recipe in this chapter is a lab exercise that demonstrates how the Heartbeat system starts resources and fails them over to a backup server.


List of ingredients:

  • 2 servers running Linux (each with two network interface cards and cables)

  • 1 crossover cable (or standard network cables and a mini hub) and/or a serial cable

  • 1 copy of the Heartbeat software package (from the Heartbeat project website or the CD-ROM)


In this recipe, one of the Linux servers will be called the primary server and the other the backup server. Begin by connecting these two systems to each other using a crossover network cable, a normal network connection (through a mini hub or separate VLAN), or a null modem serial cable. (See Appendix C for information on adding NICs and connecting the servers.) We'll use this network or serial connection exclusively for heartbeat messages (see Figure 7-1).

Figure 7-1: The Heartbeat network configuration

If you are using an Ethernet heartbeat connection, assign the primary and backup servers IP addresses for this network connection from one of the private ranges defined in RFC 1918, one address for the primary server and one for the backup server, as shown in Figure 7-1. (These are IP addresses that will only be known to the primary and backup servers.)

RFC 1918 defines the following IP address ranges for "private internets":

  • 10.0.0.0 to 10.255.255.255 (10/8 prefix)

  • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)

  • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

In this recipe, we will use one address from these private ranges as the primary server's heartbeat IP address and another as the backup server's heartbeat IP address.

Before taking the following steps, be sure that you can ping between these two systems on the network connection that you added (the crossover cable or separate network connection through a mini hub/dedicated VLAN). That is, you should get a response when you ping the backup server's heartbeat address from the primary server, and the primary server's heartbeat address from the backup server. (See Appendix C if you need help.)

Step 1: Install Heartbeat

The CD-ROM included with this book contains the Heartbeat RPMs. You can install these versions using the following commands (though you may instead want to check the Heartbeat project website for the latest versions available for download):

 #mount /mnt/cdrom
 #rpm -ivh /mnt/cdrom/chapter7/heartbeat-pils-*.rpm[1]
 #rpm -ivh /mnt/cdrom/chapter7/heartbeat-stonith-*.rpm
 #rpm -ivh /mnt/cdrom/chapter7/heartbeat-*i386.rpm

You do not need the source (src) RPM file for this recipe. The ldirectord software is also not required in this chapter—it will be discussed in Chapter 15.

Once the RPM package finishes installing, you should have an /etc/ha.d directory where Heartbeat's configuration files and scripts reside. You should also have a /usr/share/doc/packages/heartbeat directory containing sample configuration files and documentation.

[1]The PILS package was introduced with version 0.4.9d of Heartbeat. PILS is Heartbeat's generalized plug-in and interface loading system, and it is required for normal Heartbeat operation.

[2]Note that the SNMP package used to be called ucd-snmp, but it has been renamed net-snmp.

Step 2: Configure /etc/ha.d/ha.cf

Now you need to tell the heartbeat daemons that you want them to use the new Ethernet network (or the Ethernet crossover or serial cable) to send and receive heartbeat packets.

  1. Find the sample configuration file that the Heartbeat RPM installed by using this command:

     #rpm -qd heartbeat | grep ha.cf
  2. Copy the sample configuration file into place with this command:

     #cp /usr/share/doc/packages/heartbeat/ha.cf /etc/ha.d/
  3. Edit the /etc/ha.d/ha.cf file, and uncomment these lines:

     #udpport           694
     #bcast      eth0          # Linux

    Older versions of Heartbeat use udp instead of bcast.

    For example, to use eth1 to send heartbeat messages between your primary and backup server, the second line would look like this:

     bcast eth1

    If you use two physical network connections to carry heartbeats, change the second line to this:

     bcast eth0 eth1

    If you use a serial connection and an Ethernet connection, uncomment the lines for serial Heartbeat communication:

     serial /dev/ttyS0
     baud   19200
  4. Also, uncomment the keepalive, deadtime, and initdead lines so that they look like this:

     keepalive 2
     deadtime 30
     initdead 120

    The initdead line specifies that after the heartbeat daemon first starts, it should wait 120 seconds before starting any resources on the primary server or making any assumptions that something has gone wrong on the backup server. The keepalive line specifies the number of seconds between heartbeats (status messages, which were described in Chapter 6), and the deadtime line specifies how long the backup server will wait without receiving a heartbeat from the primary server before assuming something has gone wrong. If you change these numbers, Heartbeat may send warning messages indicating that you have set the values improperly (for example, a deadtime set too close to the keepalive time cannot ensure a safe configuration).

  5. Add the following two lines to the end of the /etc/ha.d/ha.cf file:

     node primary
     node backup

    Or, you can use one line for both nodes with an entry like this:

     node primary backup

    The primary and backup entries should be replaced with the names you assigned to your two hosts when you installed Linux on them (as returned by the uname -n command).

    The host names of the primary and backup servers are not usually related to any of the services offered by the servers; for example, the host name of your primary server need not match the name clients use to reach the website it hosts.


On Red Hat systems, the host name is specified in the /etc/sysconfig/network file using the HOSTNAME variable, and it is set at boot time (though it can be changed on a running system by using the hostname command).
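A quick sanity check of the timing values from step 4: with keepalive 2 and deadtime 30, the backup server tolerates deadtime/keepalive consecutive lost heartbeats before declaring the primary dead. A sketch of the arithmetic:

```shell
keepalive=2     # seconds between heartbeats
deadtime=30     # seconds of silence before declaring the peer dead
# Number of consecutive heartbeats that can be lost before failover:
echo $(( deadtime / keepalive ))    # prints 15
```

If this quotient is small (deadtime too close to keepalive), a single congested moment on the heartbeat network could trigger a spurious failover, which is why Heartbeat warns about such settings.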

Step 3: Configure /etc/ha.d/haresources

Normally the /etc/ha.d/haresources file will contain the names of the resources the primary server should own. Heartbeat generally controls its resources with the normal init script that comes with your distribution, or with a script you build, such as the iptakeover script shown in Chapter 6, but for now we will use a very simple test script to see how resource scripts work. Follow these steps:

  1. Create a script called test in the /etc/ha.d/resource.d directory with the following command (this script is also included on the CD-ROM in the chapter7 subdirectory):

     #vi /etc/ha.d/resource.d/test
  2. Now press the I key to enter insert mode, and then enter the following simple bash script:

     #!/bin/bash
     logger $0 called with $1
     case "$1" in
     start)
         # Start commands go here
         ;;
     stop)
         # Stop commands go here
         ;;
     status)
         # Status commands go here
         ;;
     esac

    The logger command in this script is used to send a message to the syslog daemon. syslog will then write the message to the proper log file based on the rules in the /etc/syslog.conf file. (See the man pages for logger and syslog for more information about custom message logging.)


    The case statement in this script does not do anything. It is included here only as a template for further development of your own custom resource script that can handle the start, stop, and status arguments that Heartbeat will use to control it.

  3. Quit this file and save your changes (press ESC, and then type :wq).

  4. Make this script executable by entering this command:

     #chmod 755 /etc/ha.d/resource.d/test
  5. Run the script by entering the following command:

     #/etc/ha.d/resource.d/test start
  6. You should be returned to the shell prompt where you can enter this command to see the end of the messages:

     #tail /var/log/messages

    The last message line in the /var/log/messages file should look like this:

     [timestamp] localhost root: /etc/ha.d/resource.d/test called with start

    This line indicates that the test resource script was called with the start argument and is using the case statement inside the script properly.
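The logger line at the top of the script relies on positional parameters: $0 expands to the path used to invoke the script and $1 to its first argument. A stand-alone sketch of that expansion, with echo standing in for logger so the message appears on the terminal instead of in the system log:

```shell
#!/bin/sh
# Prints the same text that "logger $0 called with $1" would send
# to syslog.
echo "$0 called with $1"
```

Saved as /tmp/demo.sh (a hypothetical path) and run as sh /tmp/demo.sh start, this prints "/tmp/demo.sh called with start", mirroring the message format seen in /var/log/messages.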

Configure the Haresources File

You can use this simple test script to see what Heartbeat does with resources by telling it that test is a resource. To do so, locate and copy the sample haresources file into place with these commands:

 #rpm -qd heartbeat | grep haresources
 #cp /usr/share/doc/packages/heartbeat/haresources /etc/ha.d

Now edit /etc/ha.d/haresources (which really just contains commented documentation at this point), and add the following line to the end of the file:

 primary test

In this example, primary should be replaced with the name of the primary server as returned by the command uname -n.

The haresources file tells the Heartbeat program which machine owns a resource. The resource name is really a script in either the /etc/init.d directory or the /etc/ha.d/resource.d directory. (A copy of this script must exist on both the primary and the backup server.)

Heartbeat uses the haresources configuration file to determine what to do when it first starts up. For example, if you specify that the httpd script (/etc/init.d/httpd) is a resource, Heartbeat will run this script and pass it the start command when you start the heartbeat daemon on the primary server. If you tell Heartbeat to stop (with the command /etc/init.d/heartbeat stop or service heartbeat stop), Heartbeat will run the /etc/init.d/httpd script and send it the stop command. Therefore, if Heartbeat is not running, the resource daemon (httpd in this case) will not be running.
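For example, a haresources line that hands Heartbeat both a secondary IP address and the httpd init script might look like the following sketch (the host name primary and the address 192.168.1.3 are placeholders; this syntax is covered in depth in Chapter 8):

```text
primary 192.168.1.3 httpd
```

When Heartbeat starts on the node named primary, it brings up 192.168.1.3 as a secondary IP address and then runs /etc/init.d/httpd start; on shutdown it releases them in reverse order.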

If you use a previously existing init script (from the /etc/init.d directory) that was included with your distribution, be sure to tell the system not to start this script during the normal boot process with the chkconfig --del <scriptname> command. (See Chapter 1 for a detailed description of the chkconfig command.)


If you're not using Red Hat or SuSE, you should also make sure the script prints OK, Running, or running when it is passed the status argument. See Chapter 6 for a discussion of using init scripts as Heartbeat resource scripts.

The power and flexibility available for controlling resources using the haresources file will be covered in depth in Chapter 8.

[3]In a good high-availability configuration, the ha.cf, haresources, and authkeys files will be the same on the primary and the backup server.

Step 4: Configure /etc/ha.d/authkeys

In this step we will install the security configuration file called /etc/ha.d/authkeys. The sample configuration file included with the Heartbeat distribution should be modified to protect your configuration from an attack by doing the following:

  1. Locate and copy the sample authkeys file into place with these commands:

     #rpm -qd heartbeat | grep authkeys
     #cp /usr/share/doc/packages/heartbeat/authkeys /etc/ha.d
  2. Edit the /etc/ha.d/authkeys file so the only uncommented lines look like this:

     auth 1
     1 sha1 testlab

    In these lines, don't mistake the number 1 for the letter l. The first line is auth followed by the digit one, and the second line is the digit one followed by sha, the digit one, and then testlab.

    In this example, testlab is the digital signature key used to digitally sign the heartbeat packets, and Secure Hash Algorithm 1 (sha1) is the digital signature method to be used. Change testlab in this example to a password you create, and make sure it is the same on both systems.

  3. Make sure the authkeys file is only readable by root:

     #chmod 600 /etc/ha.d/authkeys

If you fail to change the security of this file using this chmod command, the Heartbeat program will not start, and it will complain in the /var/log/messages file that you have secured this file improperly.
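Conceptually, the shared key lets each node attach a digest to every heartbeat packet that the peer, holding the same key, can recompute and verify; a node without the key cannot forge a valid digest. A rough sketch of the idea using sha1sum (Heartbeat's real scheme is an HMAC over the packet contents, so this is an illustration only, and the packet text is made up):

```shell
key="testlab"                            # the shared secret from authkeys
packet="status heartbeat from primary"   # hypothetical packet contents
# Sender computes a digest over the packet plus the key...
digest=$(printf '%s%s' "$packet" "$key" | sha1sum | awk '{print $1}')
# ...and the receiver recomputes it with its own copy of the key
# and compares the two values.
check=$(printf '%s%s' "$packet" "$key" | sha1sum | awk '{print $1}')
[ "$digest" = "$check" ] && echo "packet accepted"
```

A packet tampered with in transit, or signed with a different key, would produce a mismatching digest and be rejected.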

Step 5: Install Heartbeat on the Backup Server

Install the RPM packages on the backup server as described in the "Step 1: Install Heartbeat" section of this chapter, or use the SystemImager package described in Chapter 5 to clone the primary server. Then execute the following command on the primary server to copy all of the configuration files over to the backup system:

 #scp -r /etc/ha.d backupnode:/etc/ha.d

Here backupnode is the IP address or host name (as defined in /etc/hosts on the primary server) of the backup server. The scp command is the secure copy command that uses the SSH protocol to copy data between the two nodes. The -r option to scp recursively copies all of the files and directories underneath the /etc/ha.d directory on the primary server.

The first time you run this command, you will be asked if you are sure you want to allow the connection before the copy operation begins. Also, if you have not inserted the private key for the backup server into the primary server's SSH configuration file for the root account, you will be prompted for the root password on the backup server. (See Chapter 4 for more information about the SSH protocol.)

Step 6: Set the System Time

While Heartbeat does not require the primary and the backup servers to have synchronized system clocks, the system times on the two servers should be within a few minutes of each other, or some high-availability services may misbehave under some circumstances. You should manually check and set the system time (with the date command) before starting Heartbeat on both systems.


For a better long-term solution you should synchronize the clocks on both systems using the NTP software.

Step 7: Launch Heartbeat

Before starting the heartbeat daemon, run the following ResourceManager tests on the primary server and the backup server to make sure you have set up the configuration files correctly:

 #/usr/lib/heartbeat/ResourceManager listkeys `/bin/uname -n`

This command looks at the /etc/ha.d/haresources file and returns the list of resources (or resource keys) from this file. The only resource we have defined so far is the test resource, so the output of this command should simply be the following:

 test
Launch Heartbeat on the Primary Server

Once you have the test resource script configured properly on the primary server, start the Heartbeat program, and watch what happens in the /var/log/ messages file.

Start Heartbeat (on the primary server) with one of these commands:

 #/etc/init.d/heartbeat start


 #service heartbeat start

Then look at the system log again, with this command:

 #tail /var/log/messages

To avoid retyping this command every few seconds while Heartbeat is coming up, tell the tail command to display new information on your screen as it is appended to the /var/log/messages file by using this command:

 #tail -f /var/log/messages

Press CTRL-C to break out of this command.


To change the log file Heartbeat uses, uncomment the following line in your /etc/ha.d/ha.cf file:

 logfile        /var/log/ha-log

You can then use the tail -f /var/log/ha-log command to watch what Heartbeat is doing more closely. However, the examples in this recipe will always use the /var/log/messages file. (This does not change the amount of logging taking place.)

Heartbeat will wait for the amount of time configured for initdead in the /etc/ha.d/ha.cf file before it finishes its startup procedure, so you will have to wait at least two minutes for Heartbeat to start up (initdead was set to 120 seconds in the test configuration file).

When Heartbeat starts successfully, you should see the following message in the /var/log/messages file (I've removed the timestamp information from the beginning of each of these lines to make the messages more readable):

 primary root: test called with status
 primary heartbeat[4410]: info: **************************
 primary heartbeat[4410]: info: Configuration validated. Starting heartbeat
 primary heartbeat[4411]: info: heartbeat: version <version>
 primary heartbeat[2882]: WARN: No Previous generation - starting at 1
 primary heartbeat[4411]: info: Heartbeat generation: 1
 primary heartbeat[4411]: info: UDP Broadcast heartbeat started on port 694
 (694) interface eth1
 primary heartbeat[4414]: info: pid 4414 locked in memory.
 primary heartbeat[4415]: info: pid 4415 locked in memory.
 primary heartbeat[4416]: info: pid 4416 locked in memory.
 primary heartbeat[4416]: info: Local status now set to: 'up'
 primary heartbeat[4411]: info: pid 4411 locked in memory.
 primary heartbeat[4416]: info: Local status now set to: 'active'
 primary logger: test called with status
 primary last message repeated 2 times
 primary heartbeat: info: Acquiring resource group: test
 primary heartbeat: info: Running /etc/init.d/test  start
 primary logger: test called with start
 primary heartbeat[4417]: info: Resource acquisition completed.
 primary heartbeat[4416]: info: Link up.

The test script created earlier in the chapter does not return the word OK, Running, or running when called with the status argument, so Heartbeat assumes that the daemon is not running and runs the script with the start argument to acquire the test resource (which doesn't really do anything at this point). This can be seen in the preceding output. Notice this line:

 heartbeat[2886]: WARN: node backup: is dead

Heartbeat warns you that the backup server is dead because the heartbeat daemon hasn't yet been started on the backup server.


Heartbeat messages never contain the word ERROR or CRIT for anything that should occur under normal conditions (even during a failover). If you see an ERROR or CRIT message from Heartbeat, action is probably required on your part to resolve the problem.

Launch Heartbeat on the Backup Server

Once Heartbeat is running on the primary server, log on to the backup server, and start Heartbeat with this command:

 # /etc/init.d/heartbeat start

The /var/log/messages file on the backup server should soon contain the following:

 backup heartbeat[4650]: info: **************************
 backup heartbeat[4650]: info: Configuration validated. Starting heartbeat
 backup heartbeat[4651]: info: heartbeat: version <version>
 backup heartbeat[4651]: info: Heartbeat generation: 9
 backup heartbeat[4651]: info: UDP Broadcast heartbeat started on port 694 (694) interface eth1
 backup heartbeat[4654]: info: pid 4654 locked in memory.
 backup heartbeat[4655]: info: pid 4655 locked in memory.
 backup heartbeat[4656]: info: pid 4656 locked in memory.
 backup heartbeat[4656]: info: Local status now set to: 'up'
 backup heartbeat[4651]: info: pid 4651 locked in memory.
 backup heartbeat[4656]: info: Link up.
 backup heartbeat[4656]: info: Node status active
 backup heartbeat: info: Running /etc/ha.d/rc.d/status status
 backup heartbeat: info: Running /etc/ha.d/rc.d/ifstat ifstat
 backup heartbeat: info: Running /etc/ha.d/rc.d/ifstat ifstat
 backup heartbeat[4656]: info: No local resources [/usr/lib/heartbeat/ResourceManager listkeys]
 backup heartbeat[4656]: info: Resource acquisition completed.

Notice in this output how Heartbeat declares that this machine (the backup server) does not have any local resources in the /etc/ha.d/haresources file. This machine will act as a backup server and sit idle, simply listening for heartbeats from the primary server until that server fails. Heartbeat did not need to run the test script (/etc/ha.d/resource.d/test). The Resource acquisition completed message is a bit misleading because there were no resources for Heartbeat to acquire.


All resource script files that you refer to in the haresources file must exist and have execute permissions[6] before Heartbeat will start.
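A quick way to verify this before starting Heartbeat is a small helper like the following. This is a sketch, not part of the Heartbeat package; it ignores the leading node name on each haresources line and strips any ::argument suffixes from resource names:

```shell
#!/bin/sh
# check_resources FILE DIR... -- warn about resource scripts named in a
# haresources-style FILE that are missing or not executable in any DIR.
check_resources() {
    file=$1; shift
    # Print every field after the node name on non-comment, non-empty lines.
    awk '!/^#/ && NF { for (i = 2; i <= NF; i++) print $i }' "$file" |
    while read -r res; do
        name=${res%%::*}                  # drop ::arguments, if any
        ok=no
        for dir in "$@"; do
            [ -x "$dir/$name" ] && ok=yes
        done
        [ "$ok" = yes ] || echo "WARNING: no executable script for '$name'"
    done
}

# Typical invocation on a Heartbeat node:
# check_resources /etc/ha.d/haresources /etc/ha.d/resource.d /etc/init.d
```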

Examining the Log Files on the Primary Server

Now that the backup server is up and running, Heartbeat on the primary server should be detecting heartbeats from the backup server. At the end of the /var/log/messages file you should see the following:

 primary heartbeat[2886]: info: Heartbeat restart on node
 primary heartbeat[2886]: info: Link up.
 primary heartbeat[2886]: info: Node status up
 primary heartbeat: info: Running /etc/ha.d/rc.d/status status
 primary heartbeat: info: Running /etc/ha.d/rc.d/ifstat ifstat
 primary heartbeat[2886]: info: Node status active
 primary heartbeat: info: Running /etc/ha.d/rc.d/status status

If the primary server does not automatically recognize that the backup server is running, check to make sure that the two machines are on the same network, that they have the same broadcast address, and that no firewall rules are filtering out the packets. (Use the ifconfig command on both systems and compare the bcast numbers; they should be the same on both.) You can also use the tcpdump command to see if the heartbeat broadcasts are reaching both nodes:

 # tcpdump -i any -n -p udp port 694

This command should capture and display the heartbeat broadcast packets from either the primary or the backup server.
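To compare the broadcast addresses mentioned above without reading ifconfig output by eye, you can extract just the IPv4 broadcast addresses on each node. This sketch uses the iproute2 ip command, which supersedes ifconfig on modern systems:

```shell
# Print "interface broadcast-address" for every interface that has an
# IPv4 broadcast address; run this on both nodes and compare the output.
ip -o -4 addr show | awk '$5 == "brd" { print $2, $6 }'
```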

[4]The recipe for this chapter is using this method so that you can also watch for messages from the resource scripts with a single command (tail -f /var/log/messages).

[5]The Heartbeat "generation" number is incremented each time Heartbeat is started. If Heartbeat notices that its partner server changes generation numbers, it can take the proper action depending upon the situation. (For example, if Heartbeat thought a node was dead and then later receives another Heartbeat from the same node, it will look at the generation number. If the generation number did not increment, Heartbeat will suspect a split-brain condition and force a local restart.)

[6]Script files are normally owned by user root, and they have their permission bits configured using a command such as this: chmod 755 <scriptname>.

Stopping and Starting Heartbeat

Now is a good time to practice starting and stopping Heartbeat on both the primary and backup machines to see what happens.

Go ahead and stop Heartbeat on the primary server with one of these commands:

 # /etc/init.d/heartbeat stop

or

 # service heartbeat stop

You should see the backup server declare that the primary server has failed, and then it should run the /etc/ha.d/resource.d/test script with the start argument. The /var/log/messages file on the backup server should contain messages like these:

 heartbeat[5725]: WARN: node is dead
 heartbeat[5725]: info: Link dead.
 heartbeat: info: Running /etc/ha.d/rc.d/status status
 heartbeat: info: Running /etc/ha.d/rc.d/ifstat ifstat
 heartbeat: info: Taking over resource group test
 *** /etc/ha.d/resource.d/test called with status
 heartbeat: info: Acquiring resource group: test
 heartbeat: info: Running /etc/ha.d/resource.d/test start
 *** /etc/ha.d/resource.d/test called with start
 heartbeat: info: mach_down takeover complete.

The /etc/ha.d/resource.d/test resource or script was first called with the status argument and then with the start argument to complete the failover.

Once that's done, try starting Heartbeat on the primary server again, and watch what happens. The test script on the backup server should be called with the stop argument, and it should be called with the start argument on the primary server.[7]

Once you have performed these tests, you may also want to test other means of initiating a failover (such as unplugging the power or network connection on the primary server).


No failover will take place if heartbeat packets continue to arrive at the backup server. So, if you specified two paths for heartbeats in the /etc/ha.d/ha.cf file (a null modem serial cable and a crossover Ethernet cable, for example), unplugging only one of the physical paths will not cause a failover; both paths must be disconnected before the backup server will initiate a failover.
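Such a dual-path setup is configured in /etc/ha.d/ha.cf with one directive per medium, along these lines (the device and interface names here are examples only):

```
# /etc/ha.d/ha.cf -- two independent heartbeat media
serial /dev/ttyS0    # null modem serial cable
baud 19200           # serial line speed
bcast eth1           # crossover Ethernet cable
```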

[7]This assumes you have enabled auto_failback, which will be discussed in Chapter 9.

Monitoring Resources

Heartbeat currently does not monitor the resources it starts to see if they are running, healthy, and accessible to client computers. To monitor these resources, you need to use a separate software package called Mon (discussed in Chapter 17).

With this in mind, some system administrators are tempted to place Heartbeat on their production network and to use a single network for both heartbeat packets and client computer access to resources. This sounds like a good idea at first, because a failure of the primary computer's connection to the production network means client computers cannot access the resources, and a failover to the backup server may restore access to the resources. However, this failover may also be unnecessary (if the problem is with the network and not with the primary server), and it may not result in any improvement. Or, worse yet, it may result in a split-brain condition.

For example, if the resource is a shared SCSI disk, a split-brain condition will occur if the production network cable is disconnected from the backup server. The two servers then incorrectly think that they each have exclusive write access to the same disk drive, and they may damage or destroy the data on the shared partition. According to the high-availability design principles used to develop it, Heartbeat should fail over to the backup server only if the primary server is no longer healthy. Using the production network as the one and only path for heartbeats is bad practice and should be avoided.


Normally the only reason to use the Heartbeat package is to ensure exclusive access to a resource running on a single computer. If the service can be offered from multiple computers at the same time (and exclusive access is not required) you should avoid the unnecessary complexity of a Heartbeat high-availability failover configuration.

In Conclusion

This chapter used a "test" resource script to demonstrate how Heartbeat starts a resource and fails it over to a backup server when the primary server goes down. We've looked at a sample configuration using the three configuration files /etc/ha.d/ha.cf, /etc/ha.d/haresources, and /etc/ha.d/authkeys. These configuration files should always be the same on the primary and the backup servers to avoid confusion and errant behavior of the Heartbeat system.

The Heartbeat package was designed to fail over all resources when the primary server is no longer healthy. Best practices therefore dictate that we should build multiple physical paths for heartbeat status messages in order to avoid unnecessary failovers. Do not be tempted to use the production network as the only path for heartbeats between the primary and backup servers.

The examples in this chapter did not contain an IP address as part of the resource configuration, so no Gratuitous ARP broadcasts were used. Neither did we test a client computer's ability to connect to a service running on the servers.

Once you have tested resource failover as described in this chapter (and you understand how a resource script is called by the Heartbeat program), you are ready to insert an IP address into the /etc/ha.d/haresources file (or into the iptakeover script introduced in Chapter 6). This is the subject of Chapter 8, along with a detailed discussion of the /etc/ha.d/haresources file.
