
File and Archiving Commands



tar

The standard UNIX archiving utility. Originally a Tape ARchiving program, it has developed into a general purpose package that can handle all manner of archiving with all types of destination devices, ranging from tape drives to regular files to even stdout (see Example 3-4). GNU tar has been patched to accept various compression filters, such as tar czvf archive_name.tar.gz *, which recursively archives and gzips all files in a directory tree except dotfiles in the current working directory ($PWD). [1]

Some useful tar options:

  1. -c create (a new archive)

  2. -x extract (files from existing archive)

  3. --delete delete (files from existing archive)


    This option will not work on magnetic tape devices.

  4. -r append (files to existing archive)

  5. -A append (tar files to existing archive)

  6. -t list (contents of existing archive)

  7. -u update archive

  8. -d compare archive with specified filesystem

  9. -z gzip the archive

    (compress or uncompress, depending on whether combined with the -c or -x option)

  10. -j bzip2 the archive


It may be difficult to recover data from a corrupted gzipped tar archive. When archiving important files, make multiple backups.
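Put together, the create, list, and extract options above make a simple round trip. The project directory and filenames here are invented for the demonstration:

```shell
# 'project' is a scratch directory created just for this demo.
mkdir -p project
echo "hello" > project/readme.txt

tar czvf project.tar.gz project       # -c create, -z gzip, -f archive file
tar tzvf project.tar.gz               # -t list contents without extracting
mkdir -p restored
tar xzvf project.tar.gz -C restored   # -x extract, -C into another directory
```

Listing with -t before extracting is a cheap way to check what an unfamiliar tarball will write, and where.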


shar

Shell archiving utility. The files in a shell archive are concatenated without compression, and the resultant archive is essentially a shell script, complete with #!/bin/sh header, and containing all the necessary unarchiving commands. Shar archives still show up in Internet newsgroups, but otherwise shar has been pretty well replaced by tar/gzip. The unshar command unpacks shar archives.


ar

Creation and manipulation utility for archives, mainly used for binary object file libraries.


rpm

The Red Hat Package Manager, or rpm utility provides a wrapper for source or binary archives. It includes commands for installing and checking the integrity of packages, among other things.

A simple rpm -i package_name.rpm usually suffices to install a package, though there are many more options available.


rpm -qf identifies which package a file originates from.

bash$ rpm -qf /bin/ls


rpm -qa gives a complete list of all installed rpm packages on a given system. An rpm -qa package_name lists only the package(s) corresponding to package_name.

bash$ rpm -qa
 bash$ rpm -qa docbook-utils
 bash$ rpm -qa docbook | grep docbook


cpio

This specialized archiving copy command (copy input and output) is rarely seen any more, having been supplanted by tar/gzip. It still has its uses, such as moving a directory tree.

Example 12-27. Using cpio to move a directory tree

 #!/bin/bash
 # Copying a directory tree using 'cpio.'

 # Advantages of using 'cpio':
 #   Speed of copying. It's faster than 'tar' with pipes.
 #   Well suited for copying special files (named pipes, etc.)
 #+  that 'cp' may choke on.

 ARGS=2
 E_BADARGS=65

 if [ $# -ne "$ARGS" ]
 then
   echo "Usage: `basename $0` source destination"
   exit $E_BADARGS
 fi

 source="$1"
 destination="$2"

 find "$source" -depth | cpio -admvp "$destination"
 #               ^^^^^         ^^^^^
 # Read the 'find' and 'cpio' man page to decipher these options.

 # Exercise:
 # --------
 #  Add code to check the exit status ($?) of the 'find | cpio' pipe
 #+ and output appropriate error messages if anything went wrong.

 exit 0

rpm2cpio

This command extracts a cpio archive from an rpm one.

Example 12-28. Unpacking an rpm archive

 #!/bin/bash
 # Unpack an 'rpm' archive
 : ${1?"Usage: `basename $0` target-file"}
 # Must specify 'rpm' archive name as an argument.
 TEMPFILE=$$.cpio                         # Tempfile with "unique" name.
                                          # $$ is process ID of script.
 rpm2cpio < $1 > $TEMPFILE                # Converts rpm archive into cpio archive.
 cpio --make-directories -F $TEMPFILE -i  # Unpacks cpio archive.
 rm -f $TEMPFILE                          # Deletes cpio archive.
 exit 0
 #  Exercise:
 #  Add check for whether 1) "target-file" exists and
 #+                       2) it is really an rpm archive.
 #  Hint:                    parse output of 'file' command.



gzip

The standard GNU/UNIX compression utility, replacing the inferior and proprietary compress. The corresponding decompression command is gunzip, which is the equivalent of gzip -d.

The zcat filter decompresses a gzipped file to stdout, as possible input to a pipe or redirection. This is, in effect, a cat command that works on compressed files (including files processed with the older compress utility). The zcat command is equivalent to gzip -dc.


On some commercial UNIX systems, zcat is a synonym for uncompress -c, and will not work on gzipped files.

See also Example 7-7.
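A short illustration of the gzip / zcat round trip described above (sample.txt is a scratch file created for the demo):

```shell
echo "some compressed text" > sample.txt   # scratch file for the demo
gzip sample.txt                            # leaves only sample.txt.gz

zcat sample.txt.gz                         # decompressed copy to stdout;
                                           #+ the .gz file stays compressed
zcat sample.txt.gz | wc -w                 # feed the stream into a pipe
gunzip -c sample.txt.gz > sample.copy      # same as zcat, redirected
```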


bzip2

An alternate compression utility, usually more efficient (but slower) than gzip, especially on large files. The corresponding decompression command is bunzip2.


Newer versions of tar have been patched with bzip2 support.

compress, uncompress

This is an older, proprietary compression utility found in commercial UNIX distributions. The more efficient gzip has largely replaced it. Linux distributions generally include a compress workalike for compatibility, although gunzip can unarchive files treated with compress.


The znew command transforms compressed files into gzipped ones.


sq

Yet another compression utility, a filter that works only on sorted ASCII word lists. It uses the standard invocation syntax for a filter, sq < input-file > output-file. Fast, but not nearly as efficient as gzip. The corresponding uncompression filter is unsq, invoked like sq.


The output of sq may be piped to gzip for further compression.

zip, unzip

Cross-platform file archiving and compression utility compatible with DOS pkzip.exe. "Zipped" archives seem to be a more acceptable medium of exchange on the Internet than "tarballs".

unarc, unarj, unrar

These Linux utilities permit unpacking archives compressed with the DOS arc.exe, arj.exe, and rar.exe programs.

File Information


file

A utility for identifying file types. The command file file-name will return a file specification for file-name, such as ascii text or data. It references the magic numbers found in /usr/share/magic, /etc/magic, or /usr/lib/magic, depending on the Linux/UNIX distribution.

The -f option causes file to run in batch mode, to read from a designated file a list of filenames to analyze. The -z option, when used on a compressed target file, forces an attempt to analyze the uncompressed file type.

bash$ file test.tar.gz
 test.tar.gz: gzip compressed data, deflated, last modified: Sun Sep 16 13:34:51 2001, os: Unix
 bash$ file -z test.tar.gz
 test.tar.gz: GNU tar archive (gzip compressed data, deflated, last modified: Sun Sep 16 13:34:51 2001, os: Unix)

# Find sh and Bash scripts in a given directory:

 DIRECTORY=/usr/local/bin
 KEYWORD=Bourne
 # Bourne and Bourne-Again shell scripts

 file $DIRECTORY/* | fgrep $KEYWORD
 # Output:
 # /usr/local/bin/burn-cd:          Bourne-Again shell script text executable
 # /usr/local/bin/burnit:           Bourne-Again shell script text executable
 # /usr/local/bin/      Bourne shell script text executable
 # /usr/local/bin/copy-cd:          Bourne-Again shell script text executable
 # . . .
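The -f batch mode mentioned above can be sketched as follows; demo.sh, demo.txt, and filelist are scratch files invented for the example:

```shell
# Scratch files for the demonstration:
printf '#!/bin/sh\necho hi\n' > demo.sh
echo "plain text" > demo.txt
printf 'demo.sh\ndemo.txt\n' > filelist

file -f filelist   # one "name: type" line per file listed in 'filelist'
```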

Example 12-29. Stripping comments from C program files

 #!/bin/bash
 # Strips out the comments (/* COMMENT */) in a C program.

 E_NOARGS=0
 E_ARGERROR=66
 E_WRONG_FILE_TYPE=67

 if [ $# -eq "$E_NOARGS" ]
 then
   echo "Usage: `basename $0` C-program-file" >&2 # Error message to stderr.
   exit $E_ARGERROR
 fi

 # Test for correct file type.
 type=`file $1 | awk '{ print $2, $3, $4, $5 }'`
 # "file $1" echoes file type . . .
 # Then awk removes the first field of this, the filename . . .
 # Then the result is fed into the variable "type".
 correct_type="ASCII C program text"

 if [ "$type" != "$correct_type" ]
 then
   echo "This script works on C program files only."
   exit $E_WRONG_FILE_TYPE
 fi

 # Rather cryptic sed script:
 #--------
 sed '
 /^\/\*/d
 /.*\*\//d
 ' $1
 #--------
 # Easy to understand if you take several hours to learn sed fundamentals.

 #  Need to add one more line to the sed script to deal with
 #+ case where line of code has a comment following it on same line.
 #  This is left as a non-trivial exercise.

 #  Also, the above code deletes lines with a "*/" or "/*",
 #+ not a desirable result.

 exit 0

 # ----------------------------------------------------------------
 # Code below this line will not execute because of 'exit 0' above.

 # Stephane Chazelas suggests the following alternative:

 usage() {
   echo "Usage: `basename $0` C-program-file" >&2
   exit 1
 }

 WEIRD=`echo -n -e '\377'`   # or WEIRD=$'\377'
 [[ $# -eq 1 ]] || usage
 case `file "$1"` in
   *"C program text"*) sed -e "s%/\*%${WEIRD}%g;s%\*/%${WEIRD}%g" "$1" \
      | tr '\377\n' '\n\377' \
      | sed -ne 'p;n' \
      | tr -d '\n' | tr '\377' '\n';;
   *) usage;;
 esac

 #  This is still fooled by things like:
 #  printf("/*");
 #  or
 #  /*  /* buggy embedded comment */
 #
 #  To handle all special cases (comments in strings, comments in string
 #+ where there is a \", \\" ...) the only way is to write a C parser
 #+ (using lex or yacc perhaps?).

 exit 0

which

which command-xxx gives the full path to "command-xxx". This is useful for finding out whether a particular command or utility is installed on the system.

bash$ which rm


whereis

Similar to which, above, whereis command-xxx gives the full path to "command-xxx", but also to its manpage.

bash$ whereis rm
rm: /bin/rm /usr/share/man/man1/rm.1.bz2


whatis

whatis filexxx looks up "filexxx" in the whatis database. This is useful for identifying system commands and important configuration files. Consider it a simplified man command.

bash$ whatis whatis
whatis               (1)  - search the whatis database for complete words

Example 12-30. Exploring /usr/X11R6/bin

 #!/bin/bash

 # What are all those mysterious binaries in /usr/X11R6/bin?

 DIRECTORY="/usr/X11R6/bin"
 # Try also "/bin", "/usr/bin", "/usr/local/bin", etc.

 for file in $DIRECTORY/*
 do
   whatis `basename $file`   # Echoes info about the binary.
 done

 exit 0

 # You may wish to redirect output of this script, like so:
 # ./ >>whatis.db
 # or view it a page at a time on stdout,
 # ./ | less

See also Example 10-3.


vdir

Show a detailed directory listing. The effect is similar to ls -l.

This is one of the GNU fileutils.

bash$ vdir
 total 10
  -rw-r--r--    1 bozo  bozo      4034 Jul 18 22:04 data1.xrolo
  -rw-r--r--    1 bozo  bozo      4602 May 25 13:58 data1.xrolo.bak
  -rw-r--r--    1 bozo  bozo       877 Dec 17  2000 employment.xrolo
 bash$ ls -l
 total 10
  -rw-r--r--    1 bozo  bozo      4034 Jul 18 22:04 data1.xrolo
  -rw-r--r--    1 bozo  bozo      4602 May 25 13:58 data1.xrolo.bak
  -rw-r--r--    1 bozo  bozo       877 Dec 17  2000 employment.xrolo

locate, slocate

The locate command searches for files using a database stored for just that purpose. The slocate command is the secure version of locate (which may be aliased to slocate).

bash$ locate hickson


readlink

Disclose the file that a symbolic link points to.

bash$ readlink /usr/bin/awk
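A minimal demonstration, using a scratch symlink created on the spot:

```shell
echo "target data" > target.txt   # scratch target for the demo
ln -sf target.txt pointer         # create a symbolic link to it
readlink pointer                  # prints: target.txt
```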


strings

Use the strings command to find printable strings in a binary or data file. It will list sequences of printable characters found in the target file. This might be handy for a quick 'n dirty examination of a core dump or for looking at an unknown graphic image file (strings image-file | more might show something like JFIF, which would identify the file as a jpeg graphic). In a script, you would probably parse the output of strings with grep or sed. See Example 10-7 and Example 10-9.

Example 12-31. An "improved" strings command

 #!/bin/bash
 # "word-strings" (enhanced "strings" command)
 #
 #  This script filters the output of "strings" by checking it
 #+ against a standard word list file.
 #  This effectively eliminates gibberish and noise,
 #+ and outputs only recognized words.

 # ===========================================================
 #                 Standard Check for Script Argument(s)
 ARGS=1
 E_BADARGS=65
 E_NOFILE=66

 if [ $# -ne $ARGS ]
 then
   echo "Usage: `basename $0` filename"
   exit $E_BADARGS
 fi

 if [ ! -f "$1" ]                      # Check if file exists.
 then
     echo "File \"$1\" does not exist."
     exit $E_NOFILE
 fi
 # ===========================================================

 MINSTRLEN=3                           #  Minimum string length.
 WORDFILE=/usr/share/dict/linux.words  #  Dictionary file.
                                       #  May specify a different
                                       #+ word list file
                                       #+ of one-word-per-line format.

 wlist=`strings "$1" | tr A-Z a-z | tr '[:space:]' Z | \
 tr -cs '[:alpha:]' Z | tr -s '\173-\377' Z | tr Z ' '`
 # Translate output of 'strings' command with multiple passes of 'tr'.
 #  "tr A-Z a-z"  converts to lowercase.
 #  "tr '[:space:]'"  converts whitespace characters to Z's.
 #  "tr -cs '[:alpha:]' Z"  converts non-alphabetic characters to Z's,
 #+ and squeezes multiple consecutive Z's.
 #  "tr -s '\173-\377' Z"  converts all characters past 'z' to Z's
 #+ and squeezes multiple consecutive Z's,
 #+ which gets rid of all the weird characters that the previous
 #+ translation failed to deal with.
 #  Finally, "tr Z ' '" converts all those Z's to whitespace,
 #+ which will be seen as word separators in the loop below.

 #  ****************************************************************
 #  Note the technique of feeding the output of 'tr' back to itself,
 #+ but with different arguments and/or options on each pass.
 #  ****************************************************************

 for word in $wlist                    # Important:
                                       # $wlist must not be quoted here.
                                       # "$wlist" does not work.
                                       # Why not?
 do
   strlen=${#word}                     # String length.

   if [ "$strlen" -lt "$MINSTRLEN" ]   # Skip over short strings.
   then
     continue
   fi

   grep -Fw $word "$WORDFILE"          #  Match whole words only.
 #       ^^^                           #  "Fixed strings" and
                                       #+ "whole words" options.
 done

 exit $?


Comparison

diff, patch

diff: flexible file comparison utility. It compares the target files line-by-line sequentially. In some applications, such as comparing word dictionaries, it may be helpful to filter the files through sort and uniq before piping them to diff. diff file-1 file-2 outputs the lines in the files that differ, with carets showing which file each particular line belongs to.

The --side-by-side option to diff outputs each compared file, line by line, in separate columns, with non-matching lines marked. The -c and -u options likewise make the output of the command easier to interpret.

Various fancy frontends for diff are available, such as spiff, wdiff, xdiff, and mgdiff.


The diff command returns an exit status of 0 if the compared files are identical, and 1 if they differ. This permits use of diff in a test construct within a shell script (see below).
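A minimal sketch of diff in a test construct; the scratch files are invented for the demo:

```shell
# diff's exit status doubles as a file-equality test.
echo "alpha" > old.txt
echo "alpha" > new.txt

if diff old.txt new.txt > /dev/null
then
  echo "Files are identical."   # Exit status 0.
else
  echo "Files differ."          # Exit status 1.
fi
# Prints: Files are identical.
```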

A common use for diff is generating difference files to be used with patch. The -e option outputs files suitable for ed or ex scripts.

patch: flexible versioning utility. Given a difference file generated by diff, patch can upgrade a previous version of a package to a newer version. It is much more convenient to distribute a relatively small "diff" file than the entire body of a newly revised package. Kernel "patches" have become the preferred method of distributing the frequent releases of the Linux kernel.

patch -p1 <patch-file
 # Takes all the changes listed in 'patch-file'
 # and applies them to the files referenced therein.
 # This upgrades to a newer version of the package.

Patching the kernel:

cd /usr/src
 gzip -cd patchXX.gz | patch -p0
 # Upgrading kernel source using 'patch'.
 # From the Linux kernel docs "README",
 # by anonymous author (Alan Cox?).
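On a smaller scale, the same diff-then-patch cycle can be sketched with scratch files (assuming the patch utility is installed; all filenames here are invented):

```shell
printf 'alpha\nbeta\n'  > original.txt   # "old" version
printf 'alpha\ngamma\n' > revised.txt    # "new" version

diff -u original.txt revised.txt > changes.diff || true
# 'diff' exits 1 when the files differ; '|| true' keeps 'set -e' scripts alive.

patch original.txt < changes.diff        # bring original.txt up to date
cmp -s original.txt revised.txt && echo "patched successfully"
```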


The diff command can also recursively compare directories (for the filenames present).

bash$ diff -r ~/notes1 ~/notes2
 Only in /home/bozo/notes1: file02
 Only in /home/bozo/notes1: file03
 Only in /home/bozo/notes2: file04


Use zdiff to compare gzipped files.


diff3

An extended version of diff that compares three files at a time. This command returns an exit value of 0 upon successful execution, but unfortunately this gives no information about the results of the comparison.

bash$ diff3 file-1 file-2 file-3
    This is line 1 of "file-1".
    This is line 1 of "file-2".
    This is line 1 of "file-3"


sdiff

Compare and/or edit two files in order to merge them into an output file. Because of its interactive nature, this command would find little use in a script.


cmp

The cmp command is a simpler version of diff, above. Whereas diff reports the differences between two files, cmp merely shows at what point they differ.


Like diff, cmp returns an exit status of 0 if the compared files are identical, and 1 if they differ. This permits use in a test construct within a shell script.

Example 12-32. Using cmp to compare two files within a script.

 #!/bin/bash
 ARGS=2  # Two args to script expected.
 E_BADARGS=65
 E_UNREADABLE=66

 if [ $# -ne "$ARGS" ]
 then
   echo "Usage: `basename $0` file1 file2"
   exit $E_BADARGS
 fi

 if [[ ! -r "$1" || ! -r "$2" ]]
 then
   echo "Both files to be compared must exist and be readable."
   exit $E_UNREADABLE
 fi

 cmp $1 $2 &> /dev/null  # /dev/null buries the output of the "cmp" command.
 #   cmp -s $1 $2  has same result ("-s" silent flag to "cmp")
 #   Thank you  Anders Gustavsson for pointing this out.
 # Also works with 'diff', i.e.,   diff $1 $2 &> /dev/null

 if [ $? -eq 0 ]         # Test exit status of "cmp" command.
 then
   echo "File \"$1\" is identical to file \"$2\"."
 else
   echo "File \"$1\" differs from file \"$2\"."
 fi

 exit 0


Use zcmp on gzipped files.


comm

Versatile file comparison utility. The files must be sorted for this to be useful.

comm -options first-file second-file

comm file-1 file-2 outputs three columns:

  • column 1 = lines unique to file-1

  • column 2 = lines unique to file-2

  • column 3 = lines common to both.

The options allow suppressing output of one or more columns.

  • -1 suppresses column 1

  • -2 suppresses column 2

  • -3 suppresses column 3

  • -12 suppresses both columns 1 and 2, etc.
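A short sketch of the columns and suppression options; the fruit lists are invented sample data, and note that comm expects sorted input:

```shell
# Two small sorted lists (invented sample data).
printf '%s\n' apple banana cherry > list1
printf '%s\n' banana cherry date  > list2

comm list1 list2       # three columns: unique-to-1, unique-to-2, common
comm -12 list1 list2   # suppress columns 1 and 2: prints banana, cherry
comm -13 list1 list2   # lines unique to list2: prints date
```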



Utilities

basename

Strips the path information from a file name, printing only the file name. The construction basename $0 lets the script know its name, that is, the name it was invoked by. This can be used for "usage" messages if, for example, a script is called with missing arguments:

echo "Usage: `basename $0` arg1 arg2 ... argn"


dirname

Strips the basename from a filename, printing only the path information.


basename and dirname can operate on any arbitrary string. The argument does not need to refer to an existing file, or even be a filename for that matter (see Example A-7).

Example 12-33. basename and dirname

 #!/bin/bash

 a=/home/bozo/daily-journal.txt

 echo "Basename of /home/bozo/daily-journal.txt = `basename $a`"
 echo "Dirname of /home/bozo/daily-journal.txt = `dirname $a`"

 echo "My own home is `basename ~/`."         # `basename ~` also works.
 echo "The home of my home is `dirname ~/`."  # `dirname ~`  also works.

 exit 0

split, csplit

These are utilities for splitting a file into smaller chunks. They are usually used for splitting up large files in order to back them up on floppies or preparatory to e-mailing or uploading them.

The csplit command splits a file according to context, the split occurring where patterns are matched.
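A quick sketch of both commands on a scratch file; the chunk. and part. prefixes are arbitrary:

```shell
seq 1 10 > numbers.txt          # ten lines of sample data

split -l 4 numbers.txt chunk.   # fixed-size pieces: chunk.aa (lines 1-4),
                                #+ chunk.ab (5-8), chunk.ac (9-10)

csplit -f part. numbers.txt /5/ # split at the first line matching /5/:
                                #+ part.00 (lines 1-4), part.01 (lines 5-10)
```

csplit prints the byte count of each piece it writes, which is easy to silence with -s if unwanted.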

sum, cksum, md5sum

These are utilities for generating checksums. A checksum is a number mathematically calculated from the contents of a file, for the purpose of checking its integrity. A script might refer to a list of checksums for security purposes, such as ensuring that the contents of key system files have not been altered or corrupted. For security applications, use the 128-bit md5sum (message digest 5 checksum) command.

bash$ cksum /boot/vmlinuz
 1670054224 804083 /boot/vmlinuz
 bash$ echo -n "Top Secret" | cksum
 3391003827 10
 bash$ md5sum /boot/vmlinuz
 0f43eccea8f09e0a0b2b5cf1dcf333ba  /boot/vmlinuz
 bash$ echo -n "Top Secret" | md5sum
 8babc97a6f62a4649716f4df8d61728f  -


The cksum command shows the size, in bytes, of its target, whether file or stdout.

The md5sum command displays a dash when it receives its input from stdin.
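GNU md5sum can also verify files against a stored checksum list with its -c option, which is the kernel of integrity checks like Example 12-34 below. This sketch uses invented filenames:

```shell
echo "important data" > config.dat       # scratch file for the demo
md5sum config.dat > config.dat.md5       # record the checksum

md5sum -c config.dat.md5                 # prints: config.dat: OK

echo "tampered" >> config.dat
md5sum -c config.dat.md5 || echo "integrity check failed"
# The second check reports a failure and returns a nonzero exit status.
```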

Example 12-34. Checking file integrity

 #!/bin/bash
 # Checking whether files in a given directory
 #                    have been tampered with.

 E_DIR_NOMATCH=70
 E_BAD_DBFILE=71

 dbfile=File_record.md5
 # Filename for storing records (database file).

 set_up_database ()
 {
   echo ""$directory"" > "$dbfile"
   # Write directory name to first line of file.
   md5sum "$directory"/* >> "$dbfile"
   # Append md5 checksums and filenames.
 }

 check_database ()
 {
   local n=0
   local filename
   local checksum

   # ------------------------------------------- #
   #  This file check should be unnecessary,
   #+ but better safe than sorry.
   if [ ! -r "$dbfile" ]
   then
     echo "Unable to read checksum database file!"
     exit $E_BAD_DBFILE
   fi
   # ------------------------------------------- #

   while read record[n]
   do
     directory_checked="${record[0]}"
     if [ "$directory_checked" != "$directory" ]
     then
       echo "Directories do not match up!"
       # Tried to use file for a different directory.
       exit $E_DIR_NOMATCH
     fi

     if [ "$n" -gt 0 ]   # Not directory name.
     then
       filename[n]=$( echo ${record[$n]} | awk '{ print $2 }' )
       #  md5sum writes records backwards,
       #+ checksum first, then filename.
       checksum[n]=$( md5sum "${filename[n]}" )

       if [ "${record[n]}" = "${checksum[n]}" ]
       then
         echo "${filename[n]} unchanged."
       elif [ "`basename ${filename[n]}`" != "$dbfile" ]
              #  Skip over checksum database file,
              #+ as it will change with each invocation of script.
              #  ---
              #  This unfortunately means that when running
              #+ this script on $PWD, tampering with the
              #+ checksum database file will not be detected.
              #  Exercise: Fix this.
       then
         echo "${filename[n]} : CHECKSUM ERROR!"
         # File has been changed since last checked.
       fi
     fi

     let "n+=1"
   done <"$dbfile"       # Read from checksum database file.
 }

 # =================================================== #
 # main ()

 if [ -z  "$1" ]
 then
   directory="$PWD"      #  If not specified,
 else                    #+ use current working directory.
   directory="$1"
 fi

 clear                   # Clear screen.
 echo " Running file integrity check on $directory"
 echo

 # ------------------------------------------------------------------ #
 if [ ! -r "$dbfile" ]   # Need to create database file?
 then
   echo "Setting up database file, \""$directory"/"$dbfile"\"."; echo
   set_up_database
 fi
 # ------------------------------------------------------------------ #

 check_database          # Do the actual work.

 #  You may wish to redirect the stdout of this script to a file,
 #+ especially if the directory checked has many files in it.

 exit 0

 #  For a much more thorough file integrity check,
 #+ consider the "Tripwire" package,

See also Example A-19 and Example 33-14 for creative uses of the md5sum command.


shred

Securely erase a file by overwriting it multiple times with random bit patterns before deleting it. This command has the same effect as Example 12-54, but does it in a more thorough and elegant manner.

This is one of the GNU fileutils.


Advanced forensic technology may still be able to recover the contents of a file, even after application of shred.

Encoding and Encryption


uuencode

This utility encodes binary files into ASCII characters, making them suitable for transmission in the body of an e-mail message or in a newsgroup posting.


uudecode

This reverses the encoding, decoding uuencoded files back into the original binaries.

Example 12-35. Uudecoding encoded files

 #!/bin/bash
 # Uudecodes all uuencoded files in current working directory.

 lines=35        # Allow 35 lines for the header (very generous).

 for File in *   # Test all the files in $PWD.
 do
   search1=`head -$lines $File | grep begin | wc -w`
   search2=`tail -$lines $File | grep end | wc -w`
   #  Uuencoded files have a "begin" near the beginning,
   #+ and an "end" near the end.
   if [ "$search1" -gt 0 ]
   then
     if [ "$search2" -gt 0 ]
     then
       echo "uudecoding - $File -"
       uudecode $File
     fi
   fi
 done

 #  Note that running this script upon itself fools it
 #+ into thinking it is a uuencoded file,
 #+ because it contains both "begin" and "end".

 #  Exercise:
 #  --------
 #  Modify this script to check each file for a newsgroup header,
 #+ and skip to next if not found.

 exit 0


The fold -s command may be useful (possibly in a pipe) to process long uudecoded text messages downloaded from Usenet newsgroups.

mimencode, mmencode

The mimencode and mmencode commands process multimedia-encoded e-mail attachments. Although mail user agents (such as pine or kmail) normally handle this automatically, these particular utilities permit manipulating such attachments manually from the command line or in a batch by means of a shell script.


crypt

At one time, this was the standard UNIX file encryption utility. [2] Politically motivated government regulations prohibiting the export of encryption software resulted in the disappearance of crypt from much of the UNIX world, and it is still missing from most Linux distributions. Fortunately, programmers have come up with a number of decent alternatives to it, among them the author's very own cruft (see Example A-4).



mktemp

Create a temporary file [3] with a "unique" filename. When invoked from the command line without additional arguments, it creates a zero-length file in the /tmp directory.

bash$ mktemp

 PREFIX=filename          # Tempfile name prefix.
 tempfile=`mktemp $PREFIX.XXXXXX`
 #                        ^^^^^^ Need at least 6 placeholders
 #+                              in the filename template.
 #   If no filename template supplied,
 #+ "tmp.XXXXXXXXXX" is the default.
 echo "tempfile name = $tempfile"
 # tempfile name = filename.QA2ZpY
 #                 or something similar...
 #  Creates a file of that name in the current working directory
 #+ with 600 file permissions.
 #  A "umask 177" is therefore unnecessary,
 #  but it's good programming practice anyhow.
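A common companion idiom, not shown above, is to remove the tempfile automatically with a trap on EXIT, so cleanup happens even if the script bails out early:

```shell
tempfile=$(mktemp) || exit 1       # default template tmp.XXXXXXXXXX
trap 'rm -f "$tempfile"' EXIT      # runs on any exit, normal or otherwise

echo "scratch data" > "$tempfile"
wc -c < "$tempfile"                # prints: 13
```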


make

Utility for building and compiling binary packages. This can also be used for any set of operations that is triggered by incremental changes in source files.

The make command checks a Makefile, a list of file dependencies and operations to be carried out.


install

Special purpose file copying command, similar to cp, but capable of setting permissions and attributes of the copied files. This command seems tailormade for installing software packages, and as such it shows up frequently in Makefiles (in the make install section). It could likewise find use in installation scripts.


dos2unix

This utility, written by Benjamin Lin and collaborators, converts DOS-formatted text files (lines terminated by CR-LF) to UNIX format (lines terminated by LF only), and vice-versa.
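When dos2unix itself is not installed, the heart of the conversion, deleting carriage returns, can be approximated with tr (a rough sketch; the real utility handles more edge cases, such as mixed line endings):

```shell
printf 'line one\r\nline two\r\n' > dosfile.txt   # DOS endings: CR-LF

tr -d '\r' < dosfile.txt > unixfile.txt           # delete the carriage returns

od -c unixfile.txt | head -n 2                    # no more \r characters
```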


ptx

The ptx [targetfile] command outputs a permuted index (cross-reference list) of the targetfile. This may be further filtered and formatted in a pipe, if necessary.

more, less

Pagers that display a text file or stream to stdout, one screenful at a time. These may be used to filter the output of stdout . . . or of a script.

An interesting application of more is to "test drive" a command sequence, to forestall potentially unpleasant consequences.
ls /home/bozo | awk '{print "rm -rf " $1}' | more
 #                                            ^^^^
 # Testing the effect of the following (disastrous) command line:
 #      ls /home/bozo | awk '{print "rm -rf " $1}' | sh
 #      Hand off to the shell to execute . . .       ^^



Notes

[1] A tar czvf archive_name.tar.gz * will include dotfiles in directories below the current working directory. This is an undocumented GNU tar "feature".

[2] This is a symmetric block cipher, used to encrypt files on a single system or local network, as opposed to the "public key" cipher class, of which pgp is a well-known example.

[3] Creates a temporary directory when invoked with the -d option.

Miscellaneous Commands

Commands that fit in no special category

jot, seq

These utilities emit a sequence of integers, with a user-selected increment.

The normal separator character between each integer is a newline, but this can be changed with the -s option.

bash$ seq 5
 1
 2
 3
 4
 5

bash$ seq -s : 5
 1:2:3:4:5

Both jot and seq come in handy in a for loop.

Example 12-48. Using seq to generate loop arguments

 #!/bin/bash
 # Using "seq"

 echo

 for a in `seq 80`  # or   for a in $( seq 80 )
 # Same as   for a in 1 2 3 4 5 ... 80   (saves much typing!).
 # May also use 'jot' (if present on system).
 do
   echo -n "$a "
 done      # 1 2 3 4 5 ... 80

 # Example of using the output of a command to generate
 # the [list] in a "for" loop.

 echo; echo

 COUNT=80  # Yes, 'seq' may also take a replaceable parameter.

 for a in `seq $COUNT`  # or   for a in $( seq $COUNT )
 do
   echo -n "$a "
 done      # 1 2 3 4 5 ... 80

 echo; echo

 BEGIN=75
 END=80

 for a in `seq $BEGIN $END`
 #  Giving "seq" two arguments starts the count at the first one,
 #+ and continues until it reaches the second.
 do
   echo -n "$a "
 done      # 75 76 77 78 79 80

 echo; echo

 BEGIN=45
 INTERVAL=5
 END=80

 for a in `seq $BEGIN $INTERVAL $END`
 #  Giving "seq" three arguments starts the count at the first one,
 #+ uses the second for a step interval,
 #+ and continues until it reaches the third.
 do
   echo -n "$a "
 done      # 45 50 55 60 65 70 75 80

 echo; echo

 exit 0

Example 12-49. Letter Count

 #!/bin/bash
 # Counting letter occurrences in a text file.
 # Written by Stefano Palmeri.
 # Used in ABS Guide with permission.
 # Slightly modified by document author.

 MINARGS=2          # Script requires at least two arguments.
 E_BADARGS=65
 FILE=$1

 let LETTERS=$#-1   # How many letters specified (as command-line args).
                    # (Subtract 1 from number of command line args.)

 show_help(){
            echo
            echo Usage: `basename $0` file letters
            echo Note: `basename $0` arguments are case sensitive.
            echo Example: `basename $0` foobar.txt G n U L i N U x.
            echo
 }

 # Checks number of arguments.
 if [ $# -lt $MINARGS ]; then
    echo "Not enough arguments."
    show_help
    exit $E_BADARGS
 fi

 # Checks if file exists.
 if [ ! -f $FILE ]; then
     echo "File \"$FILE\" does not exist."
     exit $E_BADARGS
 fi

 # Counts letter occurrences .
 for n in `seq $LETTERS`; do
       shift
       if [[ `echo -n "$1" | wc -c` -eq 1 ]]; then             #  Checks arg.
              echo "$1" -\> `cat $FILE | tr -cd  "$1" | wc -c` #  Counting.
       else
              echo "$1 is not a  single char."
       fi
 done

 exit $?

 #  This script has exactly the same functionality as,
 #+ but executes faster.
 #  Why?
The getopt command parses command-line options preceded by a dash. This external command corresponds to the getopts Bash builtin. Using getopt permits handling long options by means of the -l flag, and this also allows parameter reshuffling.

Example 12-50. Using getopt to parse command-line options

 #!/bin/bash
 # Using getopt.

 # Try the following when invoking this script:
 #   sh -a
 #   sh -abc
 #   sh -a -b -c
 #   sh -d
 #   sh -dXYZ
 #   sh -d XYZ
 #   sh -abcd
 #   sh -abcdZ
 #   sh -z
 #   sh a
 # Explain the results of each of the above.

 E_OPTERR=65

 if [ "$#" -eq 0 ]
 then   # Script needs at least one command-line argument.
   echo "Usage $0 -[options a,b,c]"
   exit $E_OPTERR
 fi

 set -- `getopt "abcd:" "$@"`
 # Sets positional parameters to command-line arguments.
 # What happens if you use "$*" instead of "$@"?

 while [ ! -z "$1" ]
 do
   case "$1" in
     -a) echo "Option \"a\"";;
     -b) echo "Option \"b\"";;
     -c) echo "Option \"c\"";;
     -d) echo "Option \"d\" $2";;
      *) break;;
   esac

   shift
 done

 #  It is usually better to use the 'getopts' builtin in a script,
 #+ rather than 'getopt'.
 #  See "".

 exit 0

See Example 9-12 for a simplified emulation of getopt.


The run-parts command [1] executes all the scripts in a target directory, sequentially in ASCII-sorted filename order. Of course, the scripts need to have execute permission.

The cron daemon invokes run-parts to run the scripts in the /etc/cron.* directories.
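The behavior described above is easy to emulate, which also makes it concrete. A minimal sketch (not the real run-parts, which additionally filters out backup files and the like): execute every executable file in a directory, in the ASCII-sorted order a shell glob already provides.

```shell
dir=$(mktemp -d)   # Scratch directory standing in for /etc/cron.daily.

printf '#!/bin/sh\necho first\n'  > "$dir/01-first"
printf '#!/bin/sh\necho second\n' > "$dir/02-second"
chmod +x "$dir"/*

for script in "$dir"/*; do      # Glob expansion is already sorted.
  [ -x "$script" ] && "$script" # Skip anything lacking execute permission.
done
# Prints "first", then "second".

rm -r "$dir"
```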


In its default behavior the yes command feeds a continuous string of the character y followed by a line feed to stdout. A control-c terminates the run. A different output string may be specified, as in yes different string, which would continually output different string to stdout. One might well ask the purpose of this. From the command line or in a script, the output of yes can be redirected or piped into a program expecting user input. In effect, this becomes a sort of poor man's version of expect.

yes | fsck /dev/hda1 runs fsck non-interactively (careful!).

yes | rm -r dirname has same effect as rm -rf dirname (careful!).


Caution advised when piping yes to a potentially dangerous system command, such as fsck or fdisk. It may have unintended side-effects.
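A harmless way to see yes in action is to cap its endless output with head:

```shell
yes boilerplate | head -3
#  Prints "boilerplate" three times.
#  'head' exits after 3 lines, and 'yes' then terminates
#+ on the resulting SIGPIPE -- no control-c needed.
```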


Prints arguments as a large vertical banner to stdout, using an ASCII character (default '#'). This may be redirected to a printer for hardcopy.


Show all the environmental variables set for a particular user.

bash$ printenv | grep HOME


The lp and lpr commands send file(s) to the print queue, to be printed as hard copy. [2] These commands trace the origin of their names to the line printers of another era.

bash$ lp file1.txt or bash$ lp <file1.txt

It is often useful to pipe the formatted output from pr to lp.

bash$ pr -options file1.txt | lp

Formatting packages, such as groff and Ghostscript, may send their output directly to lp.

bash$ groff -Tascii | lp

bash$ gs -options | lp

Related commands are lpq, for viewing the print queue, and lprm, for removing jobs from the print queue.


[UNIX borrows an idea here from the plumbing trade.]

This is a redirection operator, but with a difference. Like the plumber's "tee," it permits "siphoning off" to a file the output of a command or commands within a pipe, without affecting the result. This is useful for writing an ongoing process to a file or to paper, perhaps to keep track of it for debugging purposes.

                  |------> to file
   command--->----|-operator-->---> result of command(s)

cat listfile* | sort | tee check.file | uniq > result.file
(The file check.file contains the concatenated sorted "listfiles", before the duplicate lines are removed by uniq.)
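The same pipeline can be exercised in miniature with inline data (the file names here are made up for the demonstration):

```shell
printf 'b\na\nb\n' | sort | tee sorted.tmp | uniq > deduped.tmp

cat sorted.tmp    # a, b, b  -- the siphoned-off copy still has the duplicate.
cat deduped.tmp   # a, b     -- the end of the pipe got the uniq'd result.

rm sorted.tmp deduped.tmp
```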


This obscure command creates a named pipe, a temporary first-in-first-out buffer for transferring data between processes. [3] Typically, one process writes to the FIFO, and the other reads from it. See Example A-15.
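A minimal sketch of the writer/reader arrangement described above, using a throwaway FIFO path:

```shell
fifo=$(mktemp -u)    # Generate an unused temporary pathname for the FIFO.
mkfifo "$fifo"

echo "hello through the pipe" > "$fifo" &  #  Writer, in background.
                                           #  It blocks until a reader
                                           #+ opens the other end.
read line < "$fifo"                        # Reader unblocks the writer.
echo "$line"                               # hello through the pipe

rm "$fifo"           # A FIFO is a filesystem object; clean it up.
```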


This command checks the validity of a filename. If the filename exceeds the maximum allowable length (255 characters) or one or more of the directories in its path is not searchable, then an error message results.

Unfortunately, pathchk does not return a recognizable error code, and it is therefore pretty much useless in a script. Consider instead the file test operators.


This is the somewhat obscure and much feared "data duplicator" command. Originally a utility for exchanging data on magnetic tapes between UNIX minicomputers and IBM mainframes, this command still has its uses. The dd command simply copies a file (or stdin/stdout), but with conversions. Possible conversions are ASCII/EBCDIC, [4] upper/lower case, swapping of byte pairs between input and output, and skipping and/or truncating the head or tail of the input file. A dd --help lists the conversion and other options that this powerful utility takes.

# Converting a file to all uppercase:
 dd if=$filename conv=ucase > $filename.uppercase
 #                    lcase   # For lower case conversion

Example 12-51. A script that copies itself

 # This script copies itself.

 file_subscript=copy   # Suffix for the copy of this script.

 dd if=$0 of=$0.$file_subscript 2>/dev/null
 # Suppress messages from dd:   ^^^^^^^^^^^
 exit $?

Example 12-52. Exercising dd

 # Script by Stephane Chazelas.
 # Somewhat modified by document author.

 input_file=$0        # This script.
 output_file=log.txt
 n=3
 p=5

 dd if=$input_file of=$output_file bs=1 skip=$((n-1)) count=$((p-n+1)) 2> /dev/null
 # Extracts characters n to p (3 to 5) from this script.

 # -------------------------------------------------------

 echo -n "hello world" | dd cbs=1 conv=unblock 2> /dev/null
 # Echoes "hello world" vertically.

 exit 0

To demonstrate just how versatile dd is, let's use it to capture keystrokes.

Example 12-53. Capturing Keystrokes

 # Capture keystrokes without needing to press ENTER.
 keypresses=4                      # Number of keypresses to capture.
 old_tty_setting=$(stty -g)        # Save old terminal settings.
 echo "Press $keypresses keys."
 stty -icanon -echo                # Disable canonical mode.
                                   # Disable local echo.
 keys=$(dd bs=1 count=$keypresses 2> /dev/null)
 # 'dd' uses stdin, if "if" (input file) not specified.
 stty "$old_tty_setting"           # Restore old terminal settings.
 echo "You pressed the \"$keys\" keys."
 # Thanks, Stephane Chazelas, for showing the way.
 exit 0

The dd command can do random access on a data stream.
echo -n . | dd bs=1 seek=4 of=file conv=notrunc
 # The "conv=notrunc" option means that the output file will not be truncated.		
 # Thanks, S.C.
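A self-contained demonstration of the same seek/notrunc patch-in-place technique (file name invented for the example):

```shell
printf 'AAAAAA' > patch.tmp                          # Six bytes.
printf 'B' | dd bs=1 seek=2 of=patch.tmp conv=notrunc 2>/dev/null
#  Overwrites the single byte at offset 2;
#+ "notrunc" keeps the rest of the file intact.
cat patch.tmp    # AABAAA
rm patch.tmp
```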

The dd command can copy raw data and disk images to and from devices, such as floppies and tape drives (Example A-5). A common use is creating boot floppies.

dd if=kernel-image of=/dev/fd0H1440

Similarly, dd can copy the entire contents of a floppy, even one formatted with a "foreign" OS, to the hard drive as an image file.

dd if=/dev/fd0 of=/home/bozo/projects/floppy.img

Other applications of dd include initializing temporary swap files (Example 28-2) and ramdisks (Example 28-3). It can even do a low-level copy of an entire hard drive partition, although this is not necessarily recommended.

People (with presumably nothing better to do with their time) are constantly thinking of interesting applications of dd.

Example 12-54. Securely deleting a file

 # Erase "all" traces of a file.

 #  This script overwrites a target file alternately
 #+ with random bytes, then zeros before finally deleting it.
 #  After that, even examining the raw disk sectors by conventional methods
 #+ will not reveal the original file data.

 PASSES=7         #  Number of file-shredding passes.
                  #  Increasing this slows script execution,
                  #+ especially on large target files.
 BLOCKSIZE=1      #  I/O with /dev/urandom requires unit block size,
                  #+ otherwise you get weird results.
 E_BADARGS=70     #  Various error exit codes.
 E_NOT_FOUND=71
 E_CHANGED_MIND=72

 if [ -z "$1" ]   # No filename specified.
 then
   echo "Usage: `basename $0` filename"
   exit $E_BADARGS
 fi

 file=$1

 if [ ! -e "$file" ]
 then
   echo "File \"$file\" not found."
   exit $E_NOT_FOUND
 fi

 echo; echo -n "Are you absolutely sure you want to blot out \"$file\" (y/n)? "
 read answer
 case "$answer" in
 [nN]) echo "Changed your mind, huh?"
       exit $E_CHANGED_MIND
       ;;
 *)    echo "Blotting out file \"$file\".";;
 esac

 flength=$(ls -l "$file" | awk '{print $5}')  # Field 5 is file length.
 pass_count=1

 chmod u+w "$file"   # Allow overwriting/deleting the file.

 while [ "$pass_count" -le "$PASSES" ]
 do
   echo "Pass #$pass_count"
   sync         # Flush buffers.
   dd if=/dev/urandom of=$file bs=$BLOCKSIZE count=$flength
                # Fill with random bytes.
   sync         # Flush buffers again.
   dd if=/dev/zero of=$file bs=$BLOCKSIZE count=$flength
                # Fill with zeros.
   sync         # Flush buffers yet again.
   let "pass_count += 1"
 done

 rm -f $file    # Finally, delete scrambled and shredded file.
 sync           # Flush buffers a final time.

 echo "File \"$file\" blotted out and deleted."; echo

 exit 0

 #  This is a fairly secure, if inefficient and slow method
 #+ of thoroughly "shredding" a file.
 #  The "shred" command, part of the GNU "fileutils" package,
 #+ does the same thing, although more efficiently.
 #  The file cannot be "undeleted" or retrieved by normal methods.
 #  However . . .
 #+ this simple method would *not* likely withstand
 #+ sophisticated forensic analysis.
 #  This script may not play well with a journaled file system.
 #  Exercise (difficult): Fix it so it does.
 #  Tom Vier's "wipe" file-deletion package does a much more thorough job
 #+ of file shredding than this simple script.
 #  For an in-depth analysis on the topic of file deletion and security,
 #+ see Peter Gutmann's paper,
 #+     "Secure Deletion of Data From Magnetic and Solid-State Memory".

The od, or octal dump filter converts input (or files) to octal (base-8) or other bases. This is useful for viewing or processing binary data files or otherwise unreadable system device files, such as /dev/urandom, and as a filter for binary data. See Example 9-28 and Example 12-13.
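For example, od can render arbitrary bytes as hex, which is handy for inspecting non-printing characters:

```shell
printf 'AB\n' | od -An -tx1
#  -An suppresses the address (offset) column;
#  -tx1 prints each byte as two hex digits.
#  Byte values shown: 41 42 0a  ('A', 'B', newline)
```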


Performs a hexadecimal, octal, decimal, or ASCII dump of a binary file. This command is the rough equivalent of od, above, but not nearly as useful.


Displays information about an object file or binary executable in either hexadecimal form or as a disassembled listing (with the -d option).

bash$ objdump -d /bin/ls
 /bin/ls:     file format elf32-i386
  Disassembly of section .init:
  080490bc <.init>:
   80490bc:       55                      push   %ebp
   80490bd:       89 e5                   mov    %esp,%ebp
   . . .


This command generates a "magic cookie", a 128-bit (32-character) pseudorandom hexadecimal number, normally used as an authorization "signature" by the X server. It is also available for use in a script as a "quick 'n dirty" random number.

Of course, a script could use md5 for the same purpose.
# Generate md5 checksum on the script itself.
 random001=`md5sum $0 | awk '{print $1}'`
 # Uses 'awk' to strip off the filename.

The mcookie command gives yet another way to generate a "unique" filename.

Example 12-55. Filename generator

 #  temp filename generator

 BASE_STR=`mcookie`   # 32-character magic cookie.
 POS=11               # Arbitrary position in magic cookie string.
 LEN=5                # Get $LEN consecutive characters.

 prefix=temp          #  This is, after all, a "temp" file.
                      #  For more "uniqueness," generate the filename prefix
                      #+ using the same method as the suffix, below.

 suffix=${BASE_STR:POS:LEN}
                      # Extract a 5-character string, starting at position 11.

 temp_filename=$prefix.$suffix
                      # Construct the filename.

 echo "Temp filename = "$temp_filename""

 # sh
 # Temp filename = temp.e19ea

 #  Compare this method of generating "unique" filenames
 #+ with the 'date' method.

 exit 0

This utility converts between different units of measure. While normally invoked in interactive mode, units may find use in a script.

Example 12-56. Converting meters to miles

 convert_units ()  # Takes as arguments the units to convert.
 {
   cf=$(units "$1" "$2" | sed --silent -e '1p' | awk '{print $2}')
   # Strip off everything except the actual conversion factor.
   echo "$cf"
 }

 Unit1=miles
 Unit2=meters
 cfactor=`convert_units $Unit1 $Unit2`
 quantity=3.73

 result=$(echo $quantity*$cfactor | bc)

 echo "There are $result $Unit2 in $quantity $Unit1."

 #  What happens if you pass incompatible units,
 #+ such as "acres" and "miles" to the function?

 exit 0

A hidden treasure, m4 is a powerful macro processing filter, [5] virtually a complete language. Although originally written as a pre-processor for RatFor, m4 turned out to be useful as a stand-alone utility. In fact, m4 combines some of the functionality of eval, tr, and awk, in addition to its extensive macro expansion facilities.

The April, 2002 issue of Linux Journal has a very nice article on m4 and its uses.

Example 12-57. Using m4

 # Using the m4 macro processor

 string=abcdA01

 # Strings
 echo "len($string)" | m4                           # 7
 echo "substr($string,4)" | m4                      # A01
 echo "regexp($string,[0-1][0-1],\&Z)" | m4         # 01Z

 # Arithmetic
 echo "incr(22)" | m4                               # 23
 echo "eval(99 / 3)" | m4                           # 33

 exit 0

The doexec command enables passing an arbitrary list of arguments to a binary executable. In particular, passing argv[0] (which corresponds to $0 in a script) lets the executable be invoked by various names, and it can then carry out different sets of actions, according to the name by which it was called. What this amounts to is a roundabout way of passing options to an executable.

For example, the /usr/local/bin directory might contain a binary called "aaa". Invoking doexec /usr/local/bin/aaa list would list all those files in the current working directory beginning with an "a", while invoking (the same executable with) doexec /usr/local/bin/aaa delete would delete those files.


The various behaviors of the executable must be defined within the code of the executable itself, analogous to something like the following in a shell script:
case `basename $0` in
 "name1" ) do_something;;
 "name2" ) do_something_else;;
 "name3" ) do_yet_another_thing;;
 *       ) bail_out;;
esac
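The same dispatch-on-$0 technique can be demonstrated without doexec by invoking one script through symbolic links with different names (the names hello and bye here are invented for the example):

```shell
dir=$(mktemp -d)

cat > "$dir/multiname" <<'EOF'
#!/bin/sh
# Behavior depends on the name this script was invoked by.
case `basename $0` in
  "hello" ) echo "Hello!";;
  "bye"   ) echo "Goodbye!";;
  *       ) echo "Huh?";;
esac
EOF
chmod +x "$dir/multiname"

ln -s "$dir/multiname" "$dir/hello"   # Two names for one executable.
ln -s "$dir/multiname" "$dir/bye"

"$dir/hello"    # Hello!
"$dir/bye"      # Goodbye!

rm -r "$dir"
```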


The dialog family of tools provide a method of calling interactive "dialog" boxes from a script. The more elaborate variations of dialog -- gdialog, Xdialog, and kdialog -- actually invoke X-Windows widgets. See Example 33-19.


The sox, or "sound exchange" command plays and performs transformations on sound files. In fact, the /usr/bin/play executable (now deprecated) is nothing but a shell wrapper for sox.

For example, sox soundfile.wav soundfile.au changes a WAV sound file into a (Sun audio format) AU sound file.

Shell scripts are ideally suited for batch processing sox operations on sound files. For examples, see the Linux Radio Timeshift HOWTO and the MP3do Project.



This is actually a script adapted from the Debian Linux distribution.


The print queue is the group of jobs "waiting in line" to be printed.


For an excellent overview of this topic, see Andy Vaught's article, Introduction to Named Pipes, in the September, 1997 issue of Linux Journal.


EBCDIC (pronounced "ebb-sid-ick") is an acronym for Extended Binary Coded Decimal Interchange Code. This is an IBM data format no longer in much use. A bizarre application of the conv=ebcdic option of dd is as a quick 'n easy, but not very secure text file encoder.
cat $file | dd conv=swab,ebcdic > $file_encrypted
 # Encode (looks like gibberish).		    
 # Might as well switch bytes (swab), too, for a little extra obscurity.
 cat $file_encrypted | dd conv=swab,ascii > $file_plaintext
 # Decode.


A macro is a symbolic constant that expands into a command string or a set of operations on parameters.

System and Administrative Commands

The startup and shutdown scripts in /etc/rc.d illustrate the uses (and usefulness) of many of these commands. These are usually invoked by root and used for system maintenance or emergency filesystem repairs. Use with caution, as some of these commands may damage your system if misused.

Users and Groups


Show all logged on users. This is the approximate equivalent of who -q.


Lists the current user and the groups she belongs to. This corresponds to the $GROUPS internal variable, but gives the group names, rather than the numbers.

bash$ groups
 bozita cdrom cdwriter audio xgrp
 bash$ echo $GROUPS
chown, chgrp

The chown command changes the ownership of a file or files. This command is a useful method that root can use to shift file ownership from one user to another. An ordinary user may not change the ownership of files, not even her own files. [1]

root# chown bozo *.txt

The chgrp command changes the group ownership of a file or files. You must be owner of the file(s) as well as a member of the destination group (or root) to use this operation.
chgrp --recursive dunderheads *.data
 #  The "dunderheads" group will now own all the "*.data" files
 #+ all the way down the $PWD directory tree (that's what "recursive" means).

useradd, userdel

The useradd administrative command adds a user account to the system and creates a home directory for that particular user, if so specified. The corresponding userdel command removes a user account from the system [2] and deletes associated files.


The adduser command is a synonym for useradd and is usually a symbolic link to it.


Modify a user account. Changes may be made to the password, group membership, expiration date, and other attributes of a given user's account. With this command, a user's password may be locked, which has the effect of disabling the account.


Modify a given group. The group name and/or ID number may be changed using this command.


The id command lists the real and effective user IDs and the group IDs of the user associated with the current process. This is the counterpart to the $UID, $EUID, and $GROUPS internal Bash variables.

bash$ id
 uid=501(bozo) gid=501(bozo) groups=501(bozo),22(cdrom),80(cdwriter),81(audio)
 bash$ echo $UID


The id command shows the effective IDs only when they differ from the real ones.

Also see Example 9-5.
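Individual fields of the id output are available through options, which is usually more convenient in a script than parsing the full line:

```shell
id -un   # Effective user name only (equivalent to whoami).
id -u    # Numeric user ID -- the same value Bash keeps in $UID.
id -gn   # Primary group name.
```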


Show all users logged on to the system.

bash$ who
 bozo  tty1     Apr 27 17:45
  bozo  pts/0    Apr 27 17:46
  bozo  pts/1    Apr 27 17:47
  bozo  pts/2    Apr 27 17:49

The -m option gives detailed information about only the current user. Passing any two arguments to who is the equivalent of who -m, as in who am i or who The Man.

bash$ who -m
 localhost.localdomain!bozo  pts/2    Apr 27 17:49

whoami is similar to who -m, but only lists the user name.

bash$ whoami


Show all logged on users and the processes belonging to them. This is an extended version of who. The output of w may be piped to grep to find a specific user and/or process.

bash$ w | grep startx
 bozo  tty1     -                 4:22pm  6:41   4.47s  0.45s  startx

Show current user's login name (as found in /var/run/utmp). This is a near-equivalent to whoami, above.

bash$ logname
 bash$ whoami


bash$ su
 Password: ......
 bash# whoami
 bash# logname


While logname prints the name of the logged in user, whoami gives the name of the user attached to the current process. As we have just seen, sometimes these are not the same.


Runs a program or script as a substitute user. su rjones starts a shell as user rjones. A naked su defaults to root. See Example A-15.


Runs a command as root (or another user). This may be used in a script, thus permitting a regular user to run the script.

 # Some commands.
 sudo cp /root/secretfile /home/bozo/secret
 # Some more commands.

The file /etc/sudoers holds the names of users permitted to invoke sudo.


Sets, changes, or manages a user's password.

The passwd command can be used in a script, but should not be.

Example 13-1. Setting a new password

 # For demonstration purposes only.
 # Not a good idea to actually run this script.
 # This script must be run as root.

 ROOT_UID=0         # Root has $UID 0.
 E_WRONG_USER=65    # Not root?
 E_NOSUCHUSER=70
 SUCCESS=0

 if [ "$UID" -ne "$ROOT_UID" ]
 then
   echo; echo "Only root can run this script."; echo
   exit $E_WRONG_USER
 else
   echo "You should know better than to run this script, root."
   echo "Even root users get the blues... "
 fi

 username=bozo
 NEWPASSWORD=security_violation

 # Check if bozo lives here.
 grep -q "$username" /etc/passwd
 if [ $? -ne $SUCCESS ]
 then
   echo "User $username does not exist."
   echo "No password changed."
   exit $E_NOSUCHUSER
 fi

 echo "$NEWPASSWORD" | passwd --stdin "$username"
 #  The '--stdin' option to 'passwd' permits
 #+ getting a new password from stdin (or a pipe).

 echo; echo "User $username's password changed!"

 # Using the 'passwd' command in a script is dangerous.

 exit 0

The passwd command's -l, -u, and -d options permit locking, unlocking, and deleting a user's password. Only root may use these options.


Show users' logged in time, as read from /var/log/wtmp. This is one of the GNU accounting utilities.

bash$ ac
         total       68.08

List last logged in users, as read from /var/log/wtmp. This command can also show remote logins.

For example, to show the last few times the system rebooted:

bash$ last reboot
 reboot   system boot  2.6.9-1.667      Fri Feb  4 18:18          (00:02)    
  reboot   system boot  2.6.9-1.667      Fri Feb  4 15:20          (01:27)    
  reboot   system boot  2.6.9-1.667      Fri Feb  4 12:56          (00:49)    
  reboot   system boot  2.6.9-1.667      Thu Feb  3 21:08          (02:17)    
  . . .
  wtmp begins Tue Feb  1 12:50:09 2005

Change user's group ID without logging out. This permits access to the new group's files. Since users may be members of multiple groups simultaneously, this command finds little use.



Echoes the name of the current user's terminal. Note that each separate xterm window counts as a different terminal.

bash$ tty

Shows and/or changes terminal settings. This complex command, used in a script, can control terminal behavior and the way output displays. See the info page, and study it carefully.

Example 13-2. Setting an erase character

 # Using "stty" to set an erase character when reading input.
 echo -n "What is your name? "
 read name                      #  Try to backspace
                                #+ to erase characters of input.
                                #  Problems?
 echo "Your name is $name."
 stty erase '#'                 #  Set "hashmark" (#) as erase character.
 echo -n "What is your name? "
 read name                      #  Use # to erase last character typed.
 echo "Your name is $name."
 # Warning: Even after the script exits, the new key value remains set.
 exit 0

Example 13-3. secret password: Turning off terminal echoing

 # secret password
 echo -n "Enter password "
 read passwd
 echo "password is $passwd"
 echo -n "If someone had been looking over your shoulder, "
 echo "your password would have been compromised."
 echo && echo  # Two line-feeds in an "and list."
 stty -echo    # Turns off screen echo.
 echo -n "Enter password again "
 read passwd
 echo "password is $passwd"
 stty echo     # Restores screen echo.
 exit 0
 # Do an 'info stty' for more on this useful-but-tricky command.

A creative use of stty is detecting a user keypress (without hitting ENTER).

Example 13-4. Keypress detection

 # Detect a user keypress ("hot keys").
 old_tty_settings=$(stty -g)   # Save old settings (why?).
 stty -icanon
 Keypress=$(head -c1)          # or $(dd bs=1 count=1 2> /dev/null)
                               # on non-GNU systems
 echo "Key pressed was \""$Keypress"\"."
 stty "$old_tty_settings"      # Restore old settings.
 # Thanks, Stephane Chazelas.
 exit 0

Also see Example 9-3.


Set certain terminal attributes. This command writes to its terminal's stdout a string that changes the behavior of that terminal.

bash$ setterm -cursor off

The setterm command can be used within a script to change the appearance of text written to stdout, although there are certainly better tools available for this purpose.

setterm -bold on
 echo bold hello
 setterm -bold off
 echo normal hello


Show or initialize terminal settings. This is a less capable version of stty.

bash$ tset -r
 Terminal type is xterm-xfree86.
  Kill is control-U (^U).
  Interrupt is control-C (^C).


Set or display serial port parameters. This command must be run by root user and is usually found in a system setup script.

# From /etc/pcmcia/serial script:
 IRQ=`setserial /dev/$DEVICE | sed -e 's/.*IRQ: //'`
 setserial /dev/$DEVICE irq 0 ; setserial /dev/$DEVICE irq $IRQ

getty, agetty

The initialization process for a terminal uses getty or agetty to set it up for login by a user. These commands are not used within user shell scripts. Their scripting counterpart is stty.


Enables or disables write access to the current user's terminal. Disabling access would prevent another user on the network from writing to the terminal.


It can be very annoying to have a message about ordering pizza suddenly appear in the middle of the text file you are editing. On a multi-user network, you might therefore wish to disable write access to your terminal when you need to avoid interruptions.


This is an acronym for "write all", i.e., sending a message to all users at every terminal logged into the network. It is primarily a system administrator's tool, useful, for example, when warning everyone that the system will shortly go down due to a problem (see Example 17-1).

bash$ wall System going down for maintenance in 5 minutes!
 Broadcast message from bozo (pts/1) Sun Jul  8 13:53:27 2001...
  System going down for maintenance in 5 minutes!


If write access to a particular terminal has been disabled with mesg, then wall cannot send a message to it.


Lists all system bootup messages to stdout. Handy for debugging and ascertaining which device drivers were installed and which system interrupts in use. The output of dmesg may, of course, be parsed with grep, sed, or awk from within a script.

bash$ dmesg | grep hda
 Kernel command line: ro root=/dev/hda2
  hda: IBM-DLGA-23080, ATA DISK drive
  hda: 6015744 sectors (3080 MB) w/96KiB Cache, CHS=746/128/63
  hda: hda1 hda2 hda3 < hda5 hda6 hda7 > hda4

Information and Statistics


Output system specifications (OS, kernel version, etc.) to stdout. Invoked with the -a option, it gives verbose system info (see Example 12-5). The -s option shows only the OS type.

bash$ uname -a
 Linux localhost.localdomain 2.2.15-2.5.0 #1 Sat Feb 5 00:13:43 EST 2000 i686 unknown
 bash$ uname -s

Show system architecture. Equivalent to uname -m. See Example 10-26.

bash$ arch
 bash$ uname -m

Gives information about previous commands, as stored in the /var/account/pacct file. Command name and user name can be specified by options. This is one of the GNU accounting utilities.


List the last login time of all system users. This references the /var/log/lastlog file.

bash$ lastlog
 root          tty1                      Fri Dec  7 18:43:21 -0700 2001
  bin                                     **Never logged in**
  daemon                                  **Never logged in**
  bozo          tty1                      Sat Dec  8 21:14:29 -0700 2001
 bash$ lastlog | grep root
 root          tty1                      Fri Dec  7 18:43:21 -0700 2001


This command will fail if the user invoking it does not have read permission for the /var/log/lastlog file.


List open files. This command outputs a detailed table of all currently open files and gives information about their owner, size, the processes associated with them, and more. Of course, lsof may be piped to grep and/or awk to parse and analyze its results.

bash$ lsof
  init         1    root  mem    REG        3,5   30748    30303 /sbin/init
  init         1    root  mem    REG        3,5   73120     8069 /lib/
  init         1    root  mem    REG        3,5  931668     8075 /lib/
  cardmgr    213    root  mem    REG        3,5   36956    30357 /sbin/cardmgr


Diagnostic and debugging tool for tracing system calls and signals. The simplest way of invoking it is strace COMMAND.

bash$ strace df
 execve("/bin/df", ["df"], [/* 45 vars */]) = 0
  uname({sys="Linux", node="bozo.localdomain", ...}) = 0
  brk(0)                                  = 0x804f5e4

This is the Linux equivalent of truss.


Network port scanner. This command scans a server to locate open ports and the services associated with those ports. It is an important security tool for locking down a network against hacking attempts.

 SERVER=$HOST                           # localhost.localdomain
 PORT_NUMBER=25                         # SMTP port.
 nmap $SERVER | grep -w "$PORT_NUMBER"  # Is that particular port open?
 #              grep -w matches whole words only,
 #+             so this wouldn't match port 1025, for example.
 exit 0
 # 25/tcp     open        smtp


The nc (netcat) utility is a complete toolkit for connecting to and listening to TCP and UDP ports. It is useful as a diagnostic and testing tool and as a component in simple script-based HTTP clients and servers.

bash$ nc localhost.localdomain 25
 220 localhost.localdomain ESMTP Sendmail 8.13.1/8.13.1; Thu, 31 Mar 2005 15:41:35 -0700

Example 13-5. Checking a remote server for identd

#! /bin/sh
 ## Duplicate DaveG's ident-scan thingie using netcat. Oooh, he'll be p*ssed.
 ## Args: target port [port port port ...]
 ## Hose stdout _and_ stderr together.
 ##  Advantages: runs slower than ident-scan, giving remote inetd less cause
 ##+ for alarm, and only hits the few known daemon ports you specify.
 ##  Disadvantages: requires numeric-only port args, the output sleazitude,
 ##+ and won't work for r-services when coming from high source ports.
 # Script author: Hobbit <>
 # Used in ABS Guide with permission.

 # ---------------------------------------------------
 E_BADARGS=65       # Need at least two args.
 TWO_WINKS=2        # How long to sleep.
 THREE_WINKS=3
 IDPORT=113         # Authentication "tap ident" port.
 RAND1=999
 RAND2=31337
 TIMEOUT0=9
 TIMEOUT1=8
 TIMEOUT2=4
 # ---------------------------------------------------

 case "${2}" in
   "" ) echo "Need HOST and at least one PORT." ; exit $E_BADARGS ;;
 esac

 # Ping 'em once and see if they *are* running identd.
 nc -z -w $TIMEOUT0 "$1" $IDPORT || \
 { echo "Oops, $1 isn't running identd." ; exit 0 ; }
 #  -z scans for listening daemons.
 #     -w $TIMEOUT = How long to try to connect.

 # Generate a randomish base port.
 RP=`expr $$ % $RAND1 + $RAND2`

 TRG="$1"
 shift

 while test "$1" ; do
   nc -v -w $TIMEOUT1 -p ${RP} "$TRG" ${1} < /dev/null > /dev/null &
   PROC=$!
   sleep $THREE_WINKS
   echo "${1},${RP}" | nc -w $TIMEOUT2 -r "$TRG" $IDPORT 2>&1
   sleep $TWO_WINKS

 # Does this look like a lamer script or what . . . ?
 # ABS Guide author comments: "It ain't really all that bad,
 #+                            rather clever, actually."

   kill -HUP $PROC
   RP=`expr ${RP} + 1`
   shift
 done

 exit $?
 #  Notes:
 #  -----
 #  Try commenting out line 30 and running this script
 #+ with "localhost.localdomain 25" as arguments.
 #  For more of Hobbit's 'nc' example scripts,
 #+ look in the documentation:
 #+ the /usr/share/doc/nc-X.XX/scripts directory.

And, of course, there's Dr. Andrew Tridgell's notorious one-line script in the BitKeeper Affair:
echo clone | nc 5000 > e2fsprogs.dat


Shows memory and cache usage in tabular form. The output of this command lends itself to parsing, using grep, awk or Perl. The procinfo command shows all the information that free does, and much more.

bash$ free
                 total       used       free     shared    buffers     cached
    Mem:         30504      28624       1880      15820       1608       16376
    -/+ buffers/cache:      10640      19864
    Swap:        68540       3128      65412

To show unused RAM memory:

bash$ free | grep Mem | awk '{ print $4 }'

Extract and list information and statistics from the /proc pseudo-filesystem. This gives a very extensive and detailed listing.

bash$ procinfo | grep Bootup
 Bootup: Wed Mar 21 15:15:50 2001    Load average: 0.04 0.21 0.34 3/47 6829

List devices, that is, show installed hardware.

bash$ lsdev
 Device            DMA   IRQ  I/O Ports
  cascade             4     2 
  dma                          0080-008f
  dma1                         0000-001f
  dma2                         00c0-00df
  fpu                          00f0-00ff
  ide0                     14  01f0-01f7 03f6-03f6


Show (disk) file usage, recursively. Defaults to current working directory, unless otherwise specified.

bash$ du -ach
 1.0k    ./
  1.0k    ./
  1.0k    ./random.file
  6.0k    .
  6.0k    total

Shows filesystem usage in tabular form.

bash$ df
 Filesystem           1k-blocks      Used Available Use% Mounted on
  /dev/hda5               273262     92607    166547  36% /
  /dev/hda8               222525    123951     87085  59% /home
  /dev/hda7              1408796   1075744    261488  80% /usr
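The tabular df output pipes cleanly into awk when a script needs a single field. A small sketch, using the POSIX -P flag to guarantee one line per filesystem:

```shell
# Extract the Use% figure for the root filesystem.
df -P / | awk 'NR==2 {print $5}'
#  NR==2 skips the header row; field 5 is the "Use%" column
#+ (e.g. 36% in the table above).
```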

Gives detailed and verbose statistics on a given file (even a directory or device file) or set of files.

bash$ stat test.cru
   File: "test.cru"
    Size: 49970        Allocated Blocks: 100          Filetype: Regular File
    Mode: (0664/-rw-rw-r--)         Uid: (  501/ bozo)  Gid: (  501/ bozo)
  Device:  3,8   Inode: 18185     Links: 1    
  Access: Sat Jun  2 16:40:24 2001
  Modify: Sat Jun  2 16:40:24 2001
  Change: Sat Jun  2 16:40:24 2001

If the target file does not exist, stat returns an error message.

bash$ stat nonexistent-file
 nonexistent-file: No such file or directory


Display virtual memory statistics.

bash$ vmstat
    procs                      memory    swap          io system         cpu
  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy id
  0  0  0      0  11040   2636  38952   0   0    33     7  271    88   8   3 89


Show current network statistics and information, such as routing tables and active connections. This utility accesses information in /proc/net (Chapter 27). See Example 27-3.

netstat -r is equivalent to route.

bash$ netstat
 Active Internet connections (w/o servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State      
  Active UNIX domain sockets (w/o servers)
  Proto RefCnt Flags       Type       State         I-Node Path
  unix  11     [ ]         DGRAM                    906    /dev/log
  unix  3      [ ]         STREAM     CONNECTED     4514   /tmp/.X11-unix/X0
  unix  3      [ ]         STREAM     CONNECTED     4513
  . . .

Shows how long the system has been running, along with associated statistics.

bash$ uptime
 10:28pm  up  1:57,  3 users,  load average: 0.17, 0.34, 0.27

Lists the system's host name. This command sets the host name in an /etc/rc.d setup script (/etc/rc.d/rc.sysinit or similar). It is equivalent to uname -n, and a counterpart to the $HOSTNAME internal variable.

bash$ hostname
 bash$ echo $HOSTNAME

Similar to the hostname command are the domainname, dnsdomainname, nisdomainname, and ypdomainname commands. Use these to display or set the system DNS or NIS/YP domain name. Various options to hostname also perform these functions.
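As the text notes, hostname and uname -n report the same value; a quick check:

```shell
# hostname and its uname equivalent, per the text above:
hostname           # Prints the system's host name.
uname -n           # Equivalent output.
```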


Echo a 32-bit hexadecimal numerical identifier for the host machine.

bash$ hostid


This command allegedly fetches a "unique" serial number for a particular system. Certain product registration procedures use this number to brand a particular user license. Unfortunately, hostid only returns the machine network address in hexadecimal, with pairs of bytes transposed.

The network address of a typical non-networked Linux machine is found in /etc/hosts.

bash$ cat /etc/hosts
 127.0.0.1               localhost.localdomain localhost

As it happens, transposing the bytes of 127.0.0.1, we get 0.1.127.0, which translates in hex to 007f0100, the exact equivalent of what hostid returns, above. There exist only a few million other Linux machines with this identical hostid.
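The transposition described above can be reproduced by hand in Bash; the variable names here are purely illustrative:

```shell
# Reconstruct hostid's byte transposition manually for the loopback address.
ip=127.0.0.1
IFS=. read -r a b c d <<< "$ip"           # Split into four octets.
printf '%02x%02x%02x%02x\n' "$b" "$a" "$d" "$c"
# Swapping the bytes within each pair (127,0 -> 0,127 and 0,1 -> 1,0)
# yields 007f0100, matching what hostid prints on such a machine.
```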


Invoking sar (System Activity Reporter) gives a very detailed rundown on system statistics. The Santa Cruz Operation ("Old" SCO) released sar as Open Source in June, 1999.

This command is not part of the base Linux distribution, but may be obtained as part of the sysstat utilities package, written by Sebastien Godard.

bash$ sar
 Linux 2.4.9 ( 	09/26/03
 10:30:00          CPU     %user     %nice   %system   %iowait     %idle
 10:40:00          all      2.21     10.90     65.48      0.00     21.41
 10:50:00          all      3.36      0.00     72.36      0.00     24.28
 11:00:00          all      1.12      0.00     80.77      0.00     18.11
 Average:          all      2.23      3.63     72.87      0.00     21.27
 14:32:30          LINUX RESTART
 15:00:00          CPU     %user     %nice   %system   %iowait     %idle
 15:10:00          all      8.59      2.40     17.47      0.00     71.54
 15:20:00          all      4.07      1.00     11.95      0.00     82.98
 15:30:00          all      0.79      2.94      7.56      0.00     88.71
 Average:          all      6.33      1.70     14.71      0.00     77.26

Show information and statistics about a designated ELF binary. This is part of the binutils package.

bash$ readelf -h /bin/bash
 ELF Header:
    Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00 
    Class:                             ELF32
    Data:                              2's complement, little endian
    Version:                           1 (current)
    OS/ABI:                            UNIX - System V
    ABI Version:                       0
    Type:                              EXEC (Executable file)
    . . .

The size [/path/to/binary] command gives the segment sizes of a binary executable or archive file. This is mainly of use to programmers.

bash$ size /bin/bash
    text    data     bss     dec     hex filename
   495971   22496   17392  535859   82d33 /bin/bash

System Logs


Appends a user-generated message to the system log (/var/log/messages). You do not have to be root to invoke logger.
logger Experiencing instability in network connection at 23:10, 05/21.
 # Now, do a 'tail /var/log/messages'.

By embedding a logger command in a script, it is possible to write debugging information to /var/log/messages.
logger -t $0 -i Logging at line "$LINENO".
 # The "-t" option specifies the tag for the logger entry.
 # The "-i" option records the process ID.
 # tail /var/log/message
 # ...
 # Jul  7 20:48:58 localhost ./[1712]: Logging at line 3.


This utility manages the system log files, rotating, compressing, deleting, and/or e-mailing them, as appropriate. This keeps the /var/log from getting cluttered with old log files. Usually cron runs logrotate on a daily basis.

Adding an appropriate entry to /etc/logrotate.conf makes it possible to manage personal log files, as well as system-wide ones.


Stefano Falsetto has created rottlog, which he considers to be an improved version of logrotate.

Job Control


Process Statistics: lists currently executing processes by owner and PID (process ID). This is usually invoked with ax options, and may be piped to grep or sed to search for a specific process (see Example 11-12 and Example 27-2).

bash$  ps ax | grep sendmail
 295 ?	   S	  0:00 sendmail: accepting connections on port 25

To display system processes in graphical "tree" format: ps afjx or ps ax --forest.


Lists currently executing processes in "tree" format. The -p option shows the PIDs, as well as the process names.


Continuously updated display of most cpu-intensive processes. The -b option displays in text mode, so that the output may be parsed or accessed from a script.

bash$ top -b
   8:30pm  up 3 min,  3 users,  load average: 0.49, 0.32, 0.13
  45 processes: 44 sleeping, 1 running, 0 zombie, 0 stopped
  CPU states: 13.6% user,  7.3% system,  0.0% nice, 78.9% idle
  Mem:    78396K av,   65468K used,   12928K free,       0K shrd,    2352K buff
  Swap:  157208K av,       0K used,  157208K free                   37244K cached
    848 bozo      17   0   996  996   800 R     5.6  1.2   0:00 top
      1 root       8   0   512  512   444 S     0.0  0.6   0:04 init
      2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd


Run a background job with an altered priority. Priorities run from 19 (lowest) to -20 (highest). Only root may set the negative (higher) priorities. Related commands are renice, snice, and skill.


Keeps a command running even after user logs off. The command will run as a foreground process unless followed by &. If you use nohup within a script, consider coupling it with a wait to avoid creating an orphan or zombie process.


Identifies process ID (PID) of a running job. Since job control commands, such as kill and renice act on the PID of a process (not its name), it is sometimes necessary to identify that PID. The pidof command is the approximate counterpart to the $PPID internal variable.

bash$ pidof xclock

Example 13-6. pidof helps kill a process

 #!/bin/bash
 # kill-process.sh

 NOPROCESS=2

 process=xxxyyyzzz  # Use nonexistent process.
 # For demo purposes only...
 # ... don't want to actually kill any actual process with this script.
 # If, for example, you wanted to use this script to logoff the Internet,
 #     process=pppd

 t=`pidof $process`       # Find pid (process id) of $process.
 # The pid is needed by 'kill' (can't 'kill' by program name).

 if [ -z "$t" ]           # If process not present, 'pidof' returns null.
 then
   echo "Process $process was not running."
   echo "Nothing killed."
   exit $NOPROCESS
 fi

 kill $t                  # May need 'kill -9' for stubborn process.

 # Need a check here to see if process allowed itself to be killed.
 # Perhaps another " t=`pidof $process` " or ...

 # This entire script could be replaced by
 #    kill $(pidof -x process_name)
 # but it would not be as instructive.

 exit 0

Identifies the processes (by PID) that are accessing a given file, set of files, or directory. May also be invoked with the -k option, which kills those processes. This has interesting implications for system security, especially in scripts preventing unauthorized users from accessing system services.

bash$ fuser -u /usr/bin/vim
 /usr/bin/vim:         3207e(bozo)
 bash$ fuser -u /dev/null
 /dev/null:            3009(bozo)  3010(bozo)  3197(bozo)  3199(bozo)

One important application for fuser is when physically inserting or removing storage media, such as CD ROM disks or USB flash drives. Sometimes trying a umount fails with a device is busy error message. This means that some user(s) and/or process(es) are accessing the device. An fuser -um /dev/device_name will clear up the mystery, so you can kill any relevant processes.

bash$ umount /mnt/usbdrive
 umount: /mnt/usbdrive: device is busy
 bash$ fuser -um /dev/usbdrive
 /mnt/usbdrive:        1772c(bozo)
 bash$ kill -9 1772
 bash$ umount /mnt/usbdrive

The fuser command, invoked with the -n option identifies the processes accessing a port. This is especially useful in combination with nmap.

root# nmap localhost.localdomain
  25/tcp   open  smtp
 root# fuser -un tcp 25
 25/tcp:               2095(root)
 root# ps ax | grep 2095 | grep -v grep
 2095 ?        Ss     0:00 sendmail: accepting connections


Administrative program scheduler, performing such duties as cleaning up and deleting system log files and updating the slocate database. This is the superuser version of at (although each user may have their own crontab file which can be changed with the crontab command). It runs as a daemon and executes scheduled entries from /etc/crontab.


Some flavors of Linux run crond, Matthew Dillon's version of cron.
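The entries cron executes from /etc/crontab follow a fixed field layout (minute, hour, day of month, month, day of week, user, command). A sketch, with illustrative times and the run-parts directories used by Red Hat-style systems:

# min  hour  dom  month  dow  user  command
  02   4     *    *      *    root  run-parts /etc/cron.daily    # daily, 4:02 am
  22   4     *    *      0    root  run-parts /etc/cron.weekly   # Sundays, 4:22 am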

Process Control and Booting


The init command is the parent of all processes. Called in the final step of a bootup, init determines the runlevel of the system from /etc/inittab. Invoked by its alias telinit, and by root only.


Symlinked to init, this is a means of changing the system runlevel, usually done for system maintenance or emergency filesystem repairs. Invoked only by root. This command can be dangerous - be certain you understand it well before using!


Shows the current and last runlevel, that is, whether the system is halted (runlevel 0), in single-user mode (1), in multi-user mode (2 or 3), in X Windows (5), or rebooting (6). This command accesses the /var/run/utmp file.

halt, shutdown, reboot

Command set to shut the system down, usually just prior to a power down.



Network interface configuration and tuning utility.

bash$ ifconfig -a
 lo        Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:10 errors:0 dropped:0 overruns:0 frame:0
            TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0 
            RX bytes:700 (700.0 b)  TX bytes:700 (700.0 b)

The ifconfig command is most often used at bootup to set up the interfaces, or to shut them down when rebooting.

# Code snippets from /etc/rc.d/init.d/network
 # ...
 # Check that networking is up.
 [ ${NETWORKING} = "no" ] && exit 0
 [ -x /sbin/ifconfig ] || exit 0
 # ...
 for i in $interfaces ; do
   if ifconfig $i 2>/dev/null | grep -q "UP" >/dev/null 2>&1 ; then
     action "Shutting down interface $i: " ./ifdown $i boot
 # The GNU-specific "-q" option to "grep" means "quiet", i.e., producing no output.
 # Redirecting output to /dev/null is therefore not strictly necessary.
 # ...
 echo "Currently active devices:"
 echo `/sbin/ifconfig | grep ^[a-z] | awk '{print $1}'`
 #                            ^^^^^  should be quoted to prevent globbing.
 #  The following also work.
 #    echo $(/sbin/ifconfig | awk '/^[a-z]/ { print $1 }')
 #    echo $(/sbin/ifconfig | sed -e 's/ .*//')
 #  Thanks, S.C., for additional comments.

See also Example 29-6.


This is the command set for configuring a wireless network. It is the wireless equivalent of ifconfig, above.


Show info about or make changes to the kernel routing table.

bash$ route
 Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
  pm3-67.bozosisp *               255.255.255.255 UH       40 0          0 ppp0
  127.0.0.0       *               255.0.0.0       U        40 0          0 lo
  default         pm3-67.bozosisp 0.0.0.0         UG       40 0          0 ppp0


Check network configuration. This command lists and manages the network services started at bootup in the /etc/rc?.d directory.

Originally a port from IRIX to Red Hat Linux, chkconfig may not be part of the core installation of some Linux flavors.

bash$ chkconfig --list
 atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off
  rwhod           0:off   1:off   2:off   3:off   4:off   5:off   6:off


Network packet "sniffer". This is a tool for analyzing and troubleshooting traffic on a network by dumping packet headers that match specified criteria.

Dump ip packet traffic between hosts bozoville and caduceus:
bash$ tcpdump ip host bozoville and caduceus

Of course, the output of tcpdump can be parsed, using certain of the previously discussed text processing utilities.



Mount a filesystem, usually on an external device, such as a floppy or CDROM. The file /etc/fstab provides a handy listing of available filesystems, partitions, and devices, including options, that may be automatically or manually mounted. The file /etc/mtab shows the currently mounted filesystems and partitions (including the virtual ones, such as /proc).

mount -a mounts all filesystems and partitions listed in /etc/fstab, except those with a noauto option. At bootup, a startup script in /etc/rc.d (rc.sysinit or something similar) invokes this to get everything mounted.

mount -t iso9660 /dev/cdrom /mnt/cdrom
 # Mounts CDROM
 mount /mnt/cdrom
 # Shortcut, if /mnt/cdrom listed in /etc/fstab

This versatile command can even mount an ordinary file on a block device, and the file will act as if it were a filesystem. Mount accomplishes that by associating the file with a loopback device. One application of this is to mount and examine an ISO9660 image before burning it onto a CDR. [3]

Example 13-7. Checking a CD image

# As root...
 mkdir /mnt/cdtest  # Prepare a mount point, if not already there.
 mount -r -t iso9660 -o loop cd-image.iso /mnt/cdtest   # Mount the image.
 #                  "-o loop" option equivalent to "losetup /dev/loop0"
 cd /mnt/cdtest     # Now, check the image.
 ls -alR            # List the files in the directory tree there.
                    # And so forth.

Unmount a currently mounted filesystem. Before physically removing a previously mounted floppy or CDROM disk, the device must be umounted, else filesystem corruption may result.
umount /mnt/cdrom
 # You may now press the eject button and safely remove the disk.


The automount utility, if properly installed, can mount and unmount floppies or CDROM disks as they are accessed or removed. On laptops with swappable floppy and CDROM drives, this can cause problems, though.


Forces an immediate write of all updated data from buffers to hard drive (synchronize drive with buffers). While not strictly necessary, a sync assures the sys admin or user that the data just changed will survive a sudden power failure. In the olden days, a sync; sync (twice, just to make absolutely sure) was a useful precautionary measure before a system reboot.

At times, you may wish to force an immediate buffer flush, as when securely deleting a file (see Example 12-54) or when the lights begin to flicker.


Sets up and configures loopback devices.

Example 13-8. Creating a filesystem in a file

SIZE=1000000  # 1 meg
 head -c $SIZE < /dev/zero > file  # Set up file of designated size.
 losetup /dev/loop0 file           # Set it up as loopback device.
 mke2fs /dev/loop0                 # Create filesystem.
 mount -o loop /dev/loop0 /mnt     # Mount it.
 # Thanks, S.C.

Creates a swap partition or file. The swap area must subsequently be enabled with swapon.

swapon, swapoff

Enable / disable swap partition or file. These commands usually take effect at bootup and shutdown.
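The usual sequence for a swap file can be sketched as follows; the path and size are illustrative, and the mkswap/swapon steps (root only) are shown commented out:

```shell
# Build a 16 MB file to serve as swap space.
dd if=/dev/zero of=swapfile.img bs=1M count=16 2>/dev/null
chmod 600 swapfile.img    # Swap files should not be world-readable.
# mkswap swapfile.img     # Write the swap signature (as root).
# swapon swapfile.img     # Enable it; reverse with 'swapoff swapfile.img'.
```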


Create a Linux ext2 filesystem. This command must be invoked as root.

Example 13-9. Adding a new hard drive

 #!/bin/bash
 # Adding a second hard drive to system.
 # Software configuration. Assumes hardware already mounted.
 # From an article by the author of this document,
 # in issue #38 of "Linux Gazette".

 ROOT_UID=0     # This script must be run as root.
 E_NOTROOT=67   # Non-root exit error.

 if [ "$UID" -ne "$ROOT_UID" ]
 then
   echo "Must be root to run this script."
   exit $E_NOTROOT
 fi

 # Use with extreme caution!
 # If something goes wrong, you may wipe out your current filesystem.

 NEWDISK=/dev/hdb         # Assumes /dev/hdb vacant. Check!
 MOUNTPOINT=/mnt/newdisk  # Or choose another mount point.

 fdisk $NEWDISK
 mke2fs -cv ${NEWDISK}1   # Check for bad blocks & verbose output.
 #  Note:    /dev/hdb1, *not* /dev/hdb!
 mkdir $MOUNTPOINT        # Create the mount point, if not already there.
 chmod 777 $MOUNTPOINT    # Makes new drive accessible to all users.

 # Now, test...
 # mount -t ext2 /dev/hdb1 /mnt/newdisk
 # Try creating a directory.
 # If it works, umount it, and proceed.

 # Final step:
 # Add the following line to /etc/fstab.
 # /dev/hdb1  /mnt/newdisk  ext2  defaults  1 1

 exit 0

See also Example 13-8 and Example 28-3.


Tune ext2 filesystem. May be used to change filesystem parameters, such as maximum mount count. This must be invoked as root.


This is an extremely dangerous command. Use it at your own risk, as you may inadvertently destroy your filesystem.


Dump (list to stdout) very verbose filesystem info. This must be invoked as root.

root# dumpe2fs /dev/hda7 | grep 'ount count'
 dumpe2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
  Mount count:              6
  Maximum mount count:      20

List or change hard disk parameters. This command must be invoked as root, and it may be dangerous if misused.


Create or change a partition table on a storage device, usually a hard drive. This command must be invoked as root.


Use this command with extreme caution. If something goes wrong, you may destroy an existing filesystem.

fsck, e2fsck, debugfs

Filesystem check, repair, and debug command set.

fsck: a front end for checking a UNIX filesystem (may invoke other utilities). The actual filesystem type generally defaults to ext2.

e2fsck: ext2 filesystem checker.

debugfs: ext2 filesystem debugger. One of the uses of this versatile, but dangerous command is to (attempt to) recover deleted files. For advanced users only!


All of these should be invoked as root, and they can damage or destroy a filesystem if misused.


Checks for bad blocks (physical media flaws) on a storage device. This command finds use when formatting a newly installed hard drive or testing the integrity of backup media. [4] As an example, badblocks /dev/fd0 tests a floppy disk.

The badblocks command may be invoked destructively (overwrite all data) or in non-destructive read-only mode. If root user owns the device to be tested, as is generally the case, then root must invoke this command.

lsusb, usbmodules

The lsusb command lists all USB (Universal Serial Bus) buses and the devices hooked up to them.

The usbmodules command outputs information about the driver modules for connected USB devices.

root# lsusb
 Bus 001 Device 001: ID 0000:0000  
  Device Descriptor:
    bLength                18
    bDescriptorType         1
    bcdUSB               1.00
    bDeviceClass            9 Hub
    bDeviceSubClass         0 
    bDeviceProtocol         0 
    bMaxPacketSize0         8
    idVendor           0x0000 
    idProduct          0x0000
    . . .


Creates a boot floppy which can be used to bring up the system if, for example, the MBR (master boot record) becomes corrupted. The mkbootdisk command is actually a Bash script, written by Erik Troan, in the /sbin directory.


CHange ROOT directory. Normally commands are fetched from $PATH, relative to /, the default root directory. This changes the root directory to a different one (and also changes the working directory to there). This is useful for security purposes, for instance when the system administrator wishes to restrict certain users, such as those telnetting in, to a secured portion of the filesystem (this is sometimes referred to as confining a guest user to a "chroot jail"). Note that after a chroot, the execution path for system binaries is no longer valid.

A chroot /opt would cause references to /usr/bin to be translated to /opt/usr/bin. Likewise, chroot /aaa/bbb /bin/ls would redirect future instances of ls to /aaa/bbb as the base directory, rather than / as is normally the case. An alias XX='chroot /aaa/bbb ls' in a user's ~/.bashrc effectively restricts which portion of the filesystem she may run command "XX" on.

The chroot command is also handy when running from an emergency boot floppy (chroot to /dev/fd0), or as an option to lilo when recovering from a system crash. Other uses include installation from a different filesystem (an rpm option) or running a readonly filesystem from a CD ROM. Invoke only as root, and use with care.


It might be necessary to copy certain system files to a chrooted directory, since the normal $PATH can no longer be relied upon.


This utility is part of the procmail package. It creates a lock file, a semaphore file that controls access to a file, device, or resource. The lock file serves as a flag that this particular file, device, or resource is in use by a particular process ("busy"), and this permits only restricted access (or no access) to other processes.

Lock files are used in such applications as protecting system mail folders from simultaneously being changed by multiple users, indicating that a modem port is being accessed, and showing that an instance of Netscape is using its cache. Scripts may check for the existence of a lock file created by a certain process to check if that process is running. Note that if a script attempts to create a lock file that already exists, the script will likely hang.

Normally, applications create and check for lock files in the /var/lock directory. A script can test for the presence of a lock file by something like the following.
 appname=xyzip
 # Application "xyzip" created lock file "/var/lock/xyzip.lock".

 if [ -e "/var/lock/$appname.lock" ]
 then
   ...   # The resource "xyzip" is using is busy.
 fi
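The lock-file pattern itself can be sketched in plain Bash, using noclobber redirection so that creating the lock is atomic (procmail's lockfile utility does this more robustly; the path here is illustrative):

```shell
LOCK="./demo.lock"
if ( set -o noclobber; echo $$ > "$LOCK" ) 2>/dev/null
then                          # Redirection failed if the file already existed.
  echo "lock acquired"
  # ... critical section: the resource is ours ...
  rm -f "$LOCK"               # Release the lock.
else
  echo "resource busy"        # Another process holds the lock.
fi
```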


Creates block or character device files (may be necessary when installing new hardware on the system). The MAKEDEV utility has virtually all of the functionality of mknod, and is easier to use.


Utility for creating device files. It must be run as root, and in the /dev directory.
root# ./MAKEDEV
This is a sort of advanced version of mknod.


Automatically deletes files which have not been accessed within a specified period of time. Usually invoked by cron to remove stale log files.


dump, restore

The dump command is an elaborate filesystem backup utility, generally used on larger installations and networks. [5] It reads raw disk partitions and writes a backup file in a binary format. Files to be backed up may be saved to a variety of storage media, including disks and tape drives. The restore command restores backups made with dump.


Perform a low-level format on a floppy disk.

System Resources


Sets an upper limit on use of system resources. Usually invoked with the -f option, which sets a limit on file size (ulimit -f 1000 limits files to 1 meg maximum). The -c option limits the coredump size (ulimit -c 0 eliminates coredumps). Normally, the value of ulimit would be set in /etc/profile and/or ~/.bash_profile (see Appendix G).


Judicious use of ulimit can protect a system against the dreaded fork bomb.

 #!/bin/bash
 # This script is for illustrative purposes only.
 # Run it at your own peril -- it *will* freeze your system.

 while true  #  Endless loop.
 do
   $0 &      #  This script invokes itself . . .
             #+ forks an infinite number of times . . .
             #+ until the system freezes up because all resources exhausted.
 done        #  This is the notorious "sorcerer's apprentice" scenario.
 exit 0      #  Will not exit here, because this script will never terminate.

A ulimit -Hu XX (where XX is the user process limit) in /etc/profile would abort this script when it exceeds the preset limit.
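A safer way to experiment with ulimit is inside a subshell, where the limit dies with the subshell. Bash measures -f in 1024-byte blocks (consistent with "-f 1000 limits files to 1 meg" above), so this sketch caps any file the subshell writes at 1K; the filename is illustrative:

```shell
( ulimit -f 1                              # Max file size: one 1024-byte block.
  head -c 10000 /dev/zero > capped.tmp ) 2>/dev/null
wc -c < capped.tmp                         # Write was cut off at the limit.
rm -f capped.tmp
```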


Display user or group disk quotas.


Set user or group disk quotas from the command line.


User file creation permissions mask. Limit the default file attributes for a particular user. All files created by that user take on the attributes specified by umask. The (octal) value passed to umask defines the file permissions disabled. For example, umask 022 ensures that new files will have at most 755 permissions (777 NAND 022). [6] Of course, the user may later change the attributes of particular files with chmod. The usual practice is to set the value of umask in /etc/profile and/or ~/.bash_profile (see Appendix G).

Example 13-10. Using umask to hide an output file from prying eyes

 # Same as "" script, but writes output to "secure" file.
 # Usage: ./ filename
 # or     ./ <filename
 # or     ./ and supply keyboard input (stdin)
 umask 177               #  File creation mask.
                         #  Files created by this script
                         #+ will have 600 permissions.
 OUTFILE=decrypted.txt   #  Results output to file "decrypted.txt"
                         #+ which can only be read/written
                         #  by invoker of script (or root).
 cat "$@" | tr 'a-zA-Z' 'n-za-mN-ZA-M' > $OUTFILE 
 #    ^^ Input from stdin or a file.   ^^^^^^^^^^ Output redirected to file. 
 exit 0

Get info about or make changes to root device, swap space, or video mode. The functionality of rdev has generally been taken over by lilo, but rdev remains useful for setting up a ram disk. This is a dangerous command, if misused.



List installed kernel modules.

bash$ lsmod
 Module                  Size  Used by
  autofs                  9456   2 (autoclean)
  opl3                   11376   0
  serial_cs               5456   0 (unused)
  sb                     34752   0
  uart401                 6384   0 [sb]
  sound                  58368   0 [opl3 sb uart401]
  soundlow                 464   0 [sound]
  soundcore               2800   6 [sb sound]
  ds                      6448   2 [serial_cs]
  i82365                 22928   2
  pcmcia_core            45984   0 [serial_cs ds i82365]


Doing a cat /proc/modules gives the same information.


Force installation of a kernel module (use modprobe instead, when possible). Must be invoked as root.


Force unloading of a kernel module. Must be invoked as root.


Module loader that is normally invoked automatically in a startup script. Must be invoked as root.


Creates module dependency file, usually invoked from startup script.


Output information about a loadable module.

bash$ modinfo hid
 filename:    /lib/modules/2.4.20-6/kernel/drivers/usb/hid.o
  description: "USB HID support drivers"
  author:      "Andreas Gal, Vojtech Pavlik <>"
  license:     "GPL"



Runs a program or script with certain environmental variables set or changed (without changing the overall system environment). The [varname=xxx] permits changing the environmental variable varname for the duration of the script. With no options specified, this command lists all the environmental variable settings.


In Bash and other Bourne shell derivatives, it is possible to set variables in a single command's environment.
var1=value1 var2=value2 commandXXX
 # $var1 and $var2 set in the environment of 'commandXXX' only.


The first line of a script (the "sha-bang" line) may use env when the path to the shell or interpreter is unknown.
#! /usr/bin/env perl
 print "This Perl script will run,\n";
 print "even when I don't know where to find Perl.\n";
 # Good for portable cross-platform scripts,
 # where the Perl binaries may not be in the expected place.
 # Thanks, S.C.


Show shared lib dependencies for an executable file.

bash$ ldd /bin/ls
 libc.so.6 => /lib/libc.so.6 (0x4000c000)
  /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000)

Run a command repeatedly, at specified time intervals.

The default is two-second intervals, but this may be changed with the -n option.

watch -n 5 tail /var/log/messages
 # Shows tail end of system log, /var/log/messages, every five seconds.


Remove the debugging symbolic references from an executable binary. This decreases its size, but makes debugging it impossible.

This command often occurs in a Makefile, but rarely in a shell script.
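The effect of strip is easy to observe on a scratch copy of a binary (never strip the installed original; the copy name here is illustrative):

```shell
cp /bin/ls ls.copy
before=$(wc -c < ls.copy)
strip ls.copy                # Discard symbol and debugging information.
after=$(wc -c < ls.copy)
[ "$after" -le "$before" ] && echo "copy stripped: $before -> $after bytes"
rm -f ls.copy
```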


List symbols in an unstripped compiled binary.


Remote distribution client: synchronizes, clones, or backs up a file system on a remote server.



This is the case on a Linux machine or a UNIX system with disk quotas.


The userdel command will fail if the particular user being deleted is still logged on.


For more detail on burning CDRs, see Alex Withers' article, Creating CDs, in the October, 1999 issue of Linux Journal.


The -c option to mke2fs also invokes a check for bad blocks.


Operators of single-user Linux systems generally prefer something simpler for backups, such as tar.


NAND is the logical not-and operator. Its effect is somewhat similar to subtraction.
