extensions, if the path has the "ucb" directory first. "Ucb," by the way, stands for the University of California
at Berkeley.
I hope I've given you some insight into how to use the expr command. Many people only know about its
numeric calculation features. It isn't as powerful as sed or awk, but if it can do the job, you may get a
performance gain because you are using a built-in function instead of an external program.
Bourne Shell -- Functions and argument checking
The original version of the Bourne shell didn't have functions. If you wanted to perform an operation more
than once, you either had to duplicate the code, or create a new shell script. There is a slight penalty for each
script called, as another process must be created. You also have to know where the script is, if it isn't in the
path.
The C shell has aliases, but these are limited to one line, and parsing arguments is extremely confusing. The
Bourne shell solved this problem with the concept of functions. Here is an example that counts from 1 to 10,
incrementing variable A in a function:
#!/bin/sh
inc_A() {
# Increment A by 1
A=`expr $A + 1`
}
A=1
while [ $A -le 10 ]
do
echo $A
inc_A
done
If you wish, you can define the same function on one line:
inc_A() { A=`expr $A + 1`; }
Note that you need to add a semicolon before the closing curly brace, as the shell expects a list between the
curly braces. Another way to remember this is that "}" must be the first command on a line.
Passing an argument to a function is simple: it is identical to the mechanism used to pass an argument to a
shell script. All you have to do is remember that the positional arguments are relative to the function, not the
script. That is, if you have the following script:
#!/bin/sh
func() { echo $1; }
func $2
and you call it with two arguments, the script prints the second argument, because that is the first argument in
the function.
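To make this concrete, here is a small sketch (the function name args is my own invention) showing that $# and $1 inside a function describe the function's arguments, not the script's:

```shell
#!/bin/sh
# args reports its arguments as the *function* sees them
args() {
    echo "inside: \$# is $# and \$1 is $1"
}
args apple banana    # prints: inside: $# is 2 and $1 is apple
```

No matter how the script itself was invoked, the function sees only what it was called with.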
Passing values by name
You can pass names of variables to functions. However, this adds a lot of complexity. You must bypass the
normal shell evaluation of variables. Also, strings like "$$" have special meanings.
Bourne Shell Tutorial
http://www.grymoire.com/Unix/Sh.html
58 of 66
11/21/2011 12:03 PM
Here is a function that increments a specified variable:
#!/bin/sh
inc() { eval $1=\`expr \$$1 + 1\`; }
A=10
inc A
# variable A now has the value of 11
The eval command operates on the string
A=`expr $A + 1`
and the backslashes were added to prevent the shell from interpreting the backquotes and dollar sign too
soon.
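Since the function receives the name of the variable, the same inc function works on any variable. A short sketch:

```shell
#!/bin/sh
# inc increments the variable whose *name* is passed as $1;
# the escaped backquotes and dollar sign delay evaluation until eval runs
inc() { eval $1=\`expr \$$1 + 1\`; }
A=10
B=1
inc A
inc B
echo $A $B    # prints: 11 2
```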
Exiting from a function
What happens if you execute an exit command inside a function? The same as if you executed it anywhere
else in a script: it aborts the script, and passes the exit value to the calling script.
Suppose you want to return a value from a function? If it's a string, just put it in a variable. But
that's not what I'm talking about. I've shown you how to use the exit status in commands like if and while.
What happens if you create a function and use it in an if statement? You have two choices. Normally, the
function returns with the exit status of its last command. If you want to control the value explicitly, the
Bourne shell has a special command called return that sets the status to the value specified. If no value
is specified, the status of the last command is used.
You can write a simple script to loop forever:
always_true() { return 0;}
while always_true
do
echo "I just can't control myself."
echo 'Someone stop me, please!'
done
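The return command also works with if. Here is a sketch using a made-up function, is_positive, that sets its exit status explicitly:

```shell
#!/bin/sh
# is_positive returns 0 (true) if its argument is greater than zero,
# and 1 (false) otherwise
is_positive() {
    if [ "$1" -gt 0 ]
    then
        return 0
    fi
    return 1
}
if is_positive 5
then
    echo "5 is positive"
else
    echo "5 is not positive"
fi
```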
Remember that the exit status of zero is a true condition in shell programming. Now that I've described
functions, I'll show you how to use them to parse arguments in a shell script.
Checking the number of arguments
Let's say you have a shell script that takes up to three arguments. There are many ways to handle the
argument checking. I'll try to provide a sampler of mechanisms, so you can use the one right for your
application. One way to make sure your script will work without the right number of arguments is to use
default values for the variables. This
example moves a file from one directory to another. If you don't specify any arguments, it moves file "a.out"
from the current directory to your home directory:
#!/bin/sh -x
arg1=${1:-a.out}
arg2=${2:-`pwd`}
arg3=${3:-$HOME}
mv $arg2/$arg1 $arg3
Short and simple. Not many people use this form, and it can seem daunting to the beginning shell
programmer, but it is handy at times. If you want to demand that the first argument be specified, use the form
that reports an error if an argument is missing:
#!/bin/sh
file_to_be_moved="$1"
arg1=${file_to_be_moved:?"filename missing"}
arg2=${2:-`pwd`}
arg3=${3:-$HOME}
mv $arg2/$arg1 $arg3
Notice how I added another variable with a long name. Without it, I'd get the following error:
1: filename missing
With the extra variable, I now get:
file_to_be_moved: filename missing
This is better, but perhaps a little obscure. A clearer mechanism might be:
#!/bin/sh
arg1="a.out"
arg2=`pwd`
arg3=$HOME
if [ $# -eq 0 ]
then
echo must specify at least one argument
exit 1
elif [ $# -eq 1 ]
then
arg1="$1";
elif [ $# -eq 2 ]
then
arg1="$1";
arg2="$2";
elif [ $# -eq 3 ]
then
arg1="$1";
arg2="$2";
arg3="$3";
else
echo too many arguments
exit 1
fi
mv $arg2/$arg1 $arg3
Click here to get file:
ShCmdChk1.sh
or perhaps:
#!/bin/sh
arg1="a.out"
arg2=`pwd`
arg3=$HOME
if [ $# -gt 3 ]
then
echo too many arguments
exit 1
fi
if [ $# -eq 0 ]
then
echo must specify at least one argument
exit 1
fi
[ $# -ge 1 ] && arg1=$1 # do this if 1, 2 or 3 arguments
[ $# -ge 2 ] && arg2=$2 # do this if 2 or 3 arguments
[ $# -ge 3 ] && arg3=$3 # do this if 3 arguments
mv $arg2/$arg1 $arg3
Click here to get file:
ShCmdChk2.sh
Another way to solve the problem is to use the shift command. I'll add a function this time:
#!/bin/sh
usage() {
echo `basename $0`: ERROR: $* 1>&2
echo usage: `basename $0` 'filename [fromdir] [todir]' 1>&2
exit 1
}
arg1="a.out"
arg2=`pwd`
arg3=$HOME
[ $# -gt 3 -o $# -lt 1 ] && usage "Wrong number of arguments"
arg1=$1;shift
[ $# -gt 0 ] && { arg2=$1;shift;}
[ $# -gt 0 ] && { arg3=$1;shift;}
mv $arg2/$arg1 $arg3
Click here to get file:
ShCmdChk3.sh
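The key to this approach is the shift command, which discards $1 and renumbers the remaining arguments. A minimal illustration:

```shell
#!/bin/sh
# shift discards $1; $2 becomes $1, $3 becomes $2, and $# drops by one
set -- one two three
echo "$1 ($# left)"    # prints: one (3 left)
shift
echo "$1 ($# left)"    # prints: two (2 left)
```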
Here is still another variation using the case command:
#!/bin/sh
usage() {
echo `basename $0`: ERROR: $* 1>&2
echo usage: `basename $0` 'filename [fromdir] [todir]' 1>&2
exit 1
}
arg1="a.out"
arg2=`pwd`
arg3=$HOME
case $# in
0) usage "must provide at least one argument";;
1) arg1=$1;;
2) arg1=$1;arg2=$2;;
3) arg1=$1;arg2=$2;arg3=$3;;
*) usage "too many arguments";;
esac
mv $arg2/$arg1 $arg3
Click here to get file:
ShCmdChk4.sh
As you can see, there are many variations. Notice the use of "basename $0" in the usage function. It is very
important to put the name of the script in an error message. It can be very frustrating to get an error report,
but have no idea where the error is. When scripts call other scripts in a chaotic fashion, finding the culprit can
be time-consuming. I also like putting the usage function in the beginning of a script, to help someone quickly
learn how to use the script by examining the code. Also note the "1>&2" when reporting an error. This forces
the output to go to standard error.
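To see what "1>&2" does, compare where the two messages go when standard output is captured. A small sketch:

```shell
#!/bin/sh
# the first echo writes to standard output, the second to standard error;
# the backquotes capture only standard output, so only the first message
# ends up in $msg (standard error is sent to /dev/null to keep this quiet)
msg=`{ echo "normal output"; echo "error message" 1>&2; } 2>/dev/null`
echo "captured: $msg"    # prints: captured: normal output
```

In a real script the error message would reach the terminal even if the script's output were redirected to a file.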
UNIX conventions for command line arguments
There are standards for command-line arguments in the UNIX world. Some commands, like tar, find, and dd
do not follow those conventions. I could explain why they don't follow the convention, but it's ancient history.
One must deal with life's disappointments, grasshopper.
If possible, you should go along with the conventions. Most shell scripts do not follow all of the conventions.
Perhaps a refresher course is a good idea. Still here? I'll be brief. Optional arguments always start with a
hyphen, and are a single letter or number. If an option takes a value, the space between the option and its
value is optional. For example, if the "o" option takes a value, the following two examples should do the same thing:
program -ofile
program -o file
Options that do not take arguments can be combined behind a single hyphen. Order doesn't matter. Therefore
the following examples should be equivalent:
program -a -b -c
program -c -b -a
program -ab -c
program -cb -a
program -cba
program -abc
If an option is provided that doesn't match the list of supported options, an error should occur. Another
popular convention is the hyphen-hyphen convention: if you want to pass an argument that starts with a
hyphen (such as a filename) to a script, put a hyphen-hyphen before it:
program -- -a
Checking for optional arguments
Let's assume we need a script that uses the letters a through c as single letter options, and "-o" as an option
that requires a value. Let's also assume there are any number of arguments after the options. A simple
solution might be:
#!/bin/sh
usage() {
echo `basename $0`: ERROR: $* 1>&2
echo usage: `basename $0` '[-a] [-b] [-c] [-o file]
[file ...]' 1>&2
exit 1
}
a= b= c= o=
while :
do
case "$1" in
-a) a=1;;
-b) b=1;;
-c) c=1;;
-o) shift; o="$1";;
--) shift; break;;
-*) usage "bad argument $1";;
*) break;;
esac
shift
done
# rest of script...
Click here to get file:
ShCmdArgs3.sh
This script does not allow you to combine several single-letter options behind one hyphen. Also, a space is
mandatory after the "-o" option. These limitations can be fixed, but at a cost: the code becomes much more
complicated. It does work, however:
#!/bin/sh
usage() {
echo `basename $0`: ERROR: $* 1>&2
echo usage: `basename $0` '[-[abc]] [-o file]' '[file ...]' 1>&2
exit 1
}
inside() {
# this function returns a TRUE if $2 is inside $1
# I'll use a case statement, because this is a built-in of the shell,
# and faster.
# I could use grep:
# echo $1 | grep -s "$2" >/dev/null
# or this
# echo $1 | grep -qs "$2"
# or expr:
# expr "$1" : ".*$2" >/dev/null && return 0 # true
# but case does not require another shell process
case "$1" in
*$2*) return 0;;
esac
return 1;
}
done_options=
more_options() {
# return true(0) if there are options left to parse
# otherwise, return false
# check the 'short-circuit' flag
test "$done_options" && return 1
# how many arguments are left?
[ $# -eq 0 ] && return 1
# does the next argument start with a hyphen?
inside "$1" '-' && return 0
# otherwise, return false
return 1
}
a= b= c= o=
while more_options "$1"
do
case "$1" in
--) done_options=1;;
-[abc]*)
inside "$1" a && a=1;
inside "$1" b && b=1;
inside "$1" c && c=1;
inside "$1" "[d-zA-Z]" &&
usage "unknown option in $1";
;;
-o?*)
# have to extract string from argument
o=`expr "$1" : '-o\(.*\)'`
;;
-o)
[ $# -gt 1 ] || usage "-o requires a value";
shift
o="$1";;
-*) usage "unknown option $1";;
esac
shift
# each time around, pop off the option
done
# continue with script
# ...
Click here to get file:
ShCmdArgs4.sh
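The inside function can be handy on its own. Here it is again as a self-contained sketch you can experiment with:

```shell
#!/bin/sh
# inside returns 0 (true) if the string $2 occurs anywhere inside $1,
# using only the shell's built-in case statement
inside() {
    case "$1" in
        *$2*) return 0;;
    esac
    return 1
}
inside "-abc" "b" && echo "b is inside -abc"
inside "-abc" "x" || echo "x is not inside -abc"
```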
One of my earlier versions of this script used expr instead of the case statement. This made for a smaller,
shorter script, but expr is an external command, so the script took longer to execute. This version is more
efficient, but more complicated. The question you have to ask is: is it worth it? There is another option. Many
UNIX systems have the getopt program, which lets you handle command-line options with the standard
conventions. Here is how to use it:
#!/bin/sh
usage() {
echo `basename $0`: ERROR: $* 1>&2
echo usage: `basename $0` '[-[abc]] [-o file]' '[file ...]' 1>&2
exit 1
}
args=`getopt "abco:" "$@"` || usage "invalid option"
set -- $args
a= b= c= o=
while :
do
case "$1" in
-a) a=1;;
-b) b=1;;
-c) c=1;;
-o) shift; o="$1";;
--) break;;
esac
shift
done
shift # get rid of --
# rest of script...
Click here to get file:
ShCmdArgs.sh
The getopt command does much of the work. It automatically reports illegal options. The first argument
specifies the legal option letters, and any letter followed by a colon requires a value. You should note that
getopt always places a double hyphen after the options and before the remaining arguments. I used a shift
command to get rid of it. There is one bad side effect of the getopt command: if you have any null arguments,
or arguments with spaces, the getopt command loses this information. The other approaches do not.
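The information is lost because getopt's output is re-split on whitespace when the shell expands the unquoted command substitution and hands the result to set. This sketch uses echo in place of getopt so the re-splitting itself is visible:

```shell
#!/bin/sh
# count_args reports how many arguments it receives
count_args() { echo "$# arguments"; }

count_args "two words"    # prints: 1 arguments
# the unquoted backquotes make the shell re-split the output on whitespace,
# so the quoted argument comes back as two separate words
set -- `echo "two words"`
count_args "$@"           # prints: 2 arguments
```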
I've given you a very complete collection of scripts that you can modify to handle your needs. There are
many other ways to solve the problems presented here. There is no single best method to use. Hopefully, you
now know several options, and can decide on the method best for you.
Job Control
I added this section afterwards, because it's so powerful, yet confusing to some people.
The Bourne shell has a very powerful way to control background processes in a script. Here are some
examples.
This script launches three jobs. If the parent process gets a HUP (1) or TERM (15) signal, it sends a HUP to
the child processes. Therefore if you interrupt the parent process, the children are terminated as well.
PIDS=
program1 & PIDS="$PIDS $!"
program2 & PIDS="$PIDS $!"
program3 & PIDS="$PIDS $!"
trap "kill -1 $PIDS" 1 15
Here's another example that lets you run three processes, but terminate them if 30 seconds elapses.
MYID=$$
PIDS=
(sleep 30; kill -1 $MYID) &
(sleep 5;echo A) & PIDS="$PIDS $!"
(sleep 10;echo B) & PIDS="$PIDS $!"
(sleep 50;echo C) & PIDS="$PIDS $!"
trap "echo TIMEOUT;kill $PIDS" 1
echo waiting for $PIDS
wait $PIDS
echo everything OK
Here's another example where you run "prog1" and "prog2" in the background, and run "prog3" several times
until either "prog1" or "prog2" terminates.
#!/bin/sh
MYPID=$$
done=0
trap "done=1" USR1
(prog1;echo prog1 done;kill -USR1 $$) & pid1=$!
trap "done=1" USR2
(prog2;echo prog2 done;kill -USR2 $$) & pid2=$!
trap "kill -1 $pid1 $pid2" 1 15
while [ "$done" -eq 0 ]
do
prog3
done
I could have simplified the example above by combining USR1 and USR2. But this way you could modify it to
test for both jobs to be complete instead of just one.
Also note the trap of signal 1, which allows you to terminate the parent, and have it terminate the children.
You don't want long-running jobs hanging around, especially if you are debugging the script.
You can also use this technique to pass other signals to the child processes, and have them do something
special. They may wait until they get a signal, for instance. This way you can start all of the processes at
(nearly) the same time.
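Here is a sketch of that idea. The child function is my own invention, and the "sleep 2" is a crude stand-in for a real handshake; a robust version would confirm the children are ready before sending the starting signal:

```shell
#!/bin/sh
# each child idles until it receives a USR1 signal, so the parent can
# release all of them at (nearly) the same moment
child() {
    trap 'echo "child $1 released"; exit 0' USR1
    while :
    do
        sleep 1    # idle; the trap runs when USR1 arrives
    done
}
child A & pid1=$!
child B & pid2=$!
sleep 2    # crude: give the children time to install their traps
kill -USR1 $pid1 $pid2
wait
```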
This document was translated by troff2html v0.21 on September 22, 2001.