Be sure to read the introduction to Useful Linux Commands for details on running processes in the background and running two or more commands from a single line.
The official documentation is available from GNU Bash manual - GNU Project - Free Software Foundation where you can view it in a couple of different formats.
It is also worth noting ZSH at this point; see Bash vs Zsh: A comparison of two command line shells (2019 Update) for some details and a comparison. Note also that Apple made zsh the default shell in macOS Catalina (announced June 2019). The official documentation for ZSH is at zsh: The Z Shell Manual, which includes an explanation of the startup files at An Introduction to the Z Shell - Startup Files.
It is also worth looking at the following:
This is a handy way to associate a text file with an interpreter. So if the first line of your shell script file is one of these:

#!/bin/sh
#!/bin/bash

then when you execute the file, it will run using /bin/sh, /bin/bash or whichever interpreter you chose. You can read Shebang (Unix) - Wikipedia, the free encyclopedia for some details. The list of available shells is in /etc/shells.
When writing a bash script there are a few options that can help debug an issue or help with execution:
set -x - if you add this to your script then Bash will print every command before executing it, which can be very helpful to see what is going on
set -e - exit the script immediately if a command gives a non-zero return code; the normal behaviour is to carry on
set -o pipefail - this ensures the return value of a pipeline is that of the last command to exit with a non-zero status, rather than simply the last command in the pipeline

There are more, but these are the most useful, in my experience.
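As a minimal sketch of two of these options working together (the grep pattern and file are purely illustrative), the child script below stops at the failing pipeline and never reaches its last line:

```shell
#!/bin/bash
# Run a child script with the options enabled, so we can observe the effect.
out=$(bash -c '
set -e
set -o pipefail
echo "Step one"
grep missing /dev/null | sort   # grep exits 1 on no match; pipefail makes the whole pipeline fail
echo "Never reached because of set -e"
')
status=$?
echo "child exit status: $status"
echo "child output: $out"
```

Without set -o pipefail the pipeline's status would be that of sort (0), and without set -e the child would carry on to the last echo.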
Often in a Linux Shell Script you wish to output some text to the console and this is what the echo command does. There are three ways to use it, as follows:
echo "Hello World" - this will simply output "Hello World"
echo -e "Line One\nLine Two" - this will output "Line One" and then start a new line and output "Line Two"; the "\n" is processed as a newline
echo -n "Hello" - this turns off the default trailing newline and hence allows successive echo commands to output on the same line

There are other times where you want to execute a command and put its output in the text you want to echo. Here are some examples of doing this:

echo "Hello from $(whoami)" - Hello from geoff
echo "Running on $(hostname)" - Running on testserver-vm
echo "Today is $(date) which is nice" - Today is Mon Jun 13 11:00:02 BST 2016 which is nice
echo "Today is $(date +"%X on the %x") which is nice" - Today is 11:00:02 on the 13/06/16 which is nice
As you can see the date command is very flexible. There are other ways to do this; some people use backticks, however $( ) is the nice, clear, standard way.
I have noticed that if you don't put double quotes around the text being echoed then runs of multiple spaces are collapsed to a single space; just try these:

echo Hello     World
echo "Hello     World"
Often you want to redirect output from a command to a file, whether on its own or as part of a script. A good example command is something like this: find / -name 'java', so run it and see what happens. This command will look for a file called "java" across your entire system and you will see you get error messages for directories you don't have the right permissions on. If you want to redirect the output to a file you simply do this: find / -name 'java' > output.txt. You will notice that some output goes into the file and other output to the console. There are two types of output, "regular output" (stdout) and "error output" (stderr), and in this example stdout has gone to the file and stderr to the console. They actually have numbers, stdout is 1 and stderr is 2, so now the following examples should make sense:

find / -name 'java' - all output to the console
find / -name 'java' > output.txt - stdout to the file, stderr to the console
find / -name 'java' &> output.txt - both stdout and stderr to the file
find / -name 'java' > output.txt 2>&1 - both stdout and stderr to the file
find / -name 'java' > output.txt 2> error.txt - stdout and stderr sent to different files
find / -name 'java' 2> /dev/null - stdout to the console, stderr sent to /dev/null, i.e. discarded
Hopefully you now understand this and can see some more options, although possibly less useful ones!
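One detail worth a quick sketch: the order of redirections matters, because 2>&1 copies wherever stdout points at that moment. The helper function and file names here are only illustrative:

```shell
#!/bin/bash
# A helper that writes one line to stdout and one to stderr.
both() {
  echo "to stdout"
  echo "to stderr" >&2
}

both > out1.txt 2>&1   # stdout redirected to the file first, stderr then follows it: both lines in out1.txt
both 2>&1 > out2.txt   # stderr copies the console first, then stdout moves: only one line in out2.txt

lines1=$(grep -c '' out1.txt)
lines2=$(grep -c '' out2.txt)
echo "out1.txt has $lines1 lines, out2.txt has $lines2 lines"
rm out1.txt out2.txt
```

This is why the idiomatic form is always "> output.txt 2>&1" and not the other way around.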
There are times when you need to ask for input with a script and this is when the read
command comes into its own. An example might be when you want to ask a user for their credentials, for example:
#!/bin/bash
read -p 'Username: ' i_username
read -sp "Password for $i_username: " i_password
echo
echo "Thanks $i_username, got your credentials..."
Notice that the -p switch allows you to specify text as a prompt, and -s suppresses the echoing of the input, so it is suitable for capturing passwords.
I recently had an issue where I needed to extract some information out of a file, but not all of it, so I used this:

cat /etc/oratab | grep :/ | cut -d: -f2 | sort | uniq
So this lists the contents of /etc/oratab with the cat command, passes the output to grep which looks for lines with ":/" in them, these lines are sent to cut which uses a : delimiter and returns the second field of all those lines, the lines are then sorted and only unique ones output, nice!
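As a self-contained sketch of the same pipeline, here it runs against some sample oratab-style content held in a variable instead of the real /etc/oratab (the SIDs and paths are made up for illustration):

```shell
#!/bin/bash
# Sample oratab-style content: SID:ORACLE_HOME:startup_flag
oratab='# this is a comment line
DB1:/u01/app/oracle/product/19c:Y
DB2:/u01/app/oracle/product/19c:N
DB3:/u01/app/oracle/product/12c:Y'

# Keep lines containing ":/", take field 2 (the Oracle home), sort and de-duplicate.
homes=$(echo "$oratab" | grep :/ | cut -d: -f2 | sort | uniq)
echo "$homes"
```

The three data lines collapse to two unique homes; note that sort -u is a common shorthand for sort | uniq.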
Usually within a script you need to know if at least some of the commands completed successfully. I would suggest you will need something like this:
sqlplus user/password@database @my_script.sql
OUT=$?
if [ $OUT -eq 0 ]; then
  echo "Command completed successfully"
else
  echo "ERROR: something went wrong!"
fi
This example executes a SQL script with Oracle's SQL*Plus. It is important that the very next line captures the return code ($?), otherwise it may capture another command's return code; an extra "echo" line in between, for example, would overwrite it.
There are some more advanced options; for example Bash has the internal array variable PIPESTATUS, which can be used thus:

echo ${PIPESTATUS[0]}
echo ${PIPESTATUS[@]}

This is great for seeing exactly which part of the line failed when you have a series of commands in a pipeline, like ps -ef | grep ftp. It is important to note that you cannot check $? and then PIPESTATUS, because every command (including an echo) resets both, so you need to use one or the other and capture it immediately. There is also set -e and set -o pipefail, which will make the script fail if one command fails.
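A short sketch of capturing the per-stage codes before anything resets them (false and true stand in for real commands):

```shell
#!/bin/bash
# A pipeline where the first stage fails (exit 1) and the second succeeds (exit 0).
false | true

# Copy the whole array immediately: the very next command would reset PIPESTATUS.
statuses=("${PIPESTATUS[@]}")

echo "stage 1 returned ${statuses[0]}, stage 2 returned ${statuses[1]}"
```

Note that $? alone would report 0 here, the status of the last stage, hiding the failure of the first.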
A simple example of setting and using a variable is as follows:
greeting="Hello World"
echo "The greeting [$greeting]"
Sometimes though you want to put the output of a unix command into a variable, this is done like this:
machine=$(uname -m)
echo "Machine: $machine"
You can put more complex commands into the brackets and pipe output from one command to another.
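For instance, a full pipeline inside the $( ) works just as it would on the command line, and substitutions can even be nested (the commands here are only illustrative):

```shell
#!/bin/bash
# Count the entries in PATH by converting ":" separators to newlines;
# tr -d ' ' strips the leading spaces some wc implementations print.
path_count=$(echo "$PATH" | tr ':' '\n' | wc -l | tr -d ' ')
echo "PATH has $path_count entries"

# Nested substitution: dirname runs first, then basename on its output.
parent=$(basename "$(dirname "/usr/local/bin")")
echo "Parent directory name: $parent"
```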
If you have a text file and wish to read its contents line by line and process each line in some way then the following example is a good starting point.
while read p; do
  echo "$p"
done < data.txt
Clearly this can be edited and built upon.
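One way to build on it: IFS= preserves leading whitespace and -r stops backslashes being interpreted, which makes the loop safer for arbitrary data. This sketch feeds itself from a here-document rather than data.txt so it is self-contained:

```shell
#!/bin/bash
count=0
while IFS= read -r line; do
  count=$((count + 1))
  echo "Line $count: $line"
done <<'EOF'
first line
  indented line
back\slash line
EOF
echo "Processed $count lines"
```

Because the input is redirected rather than piped, the loop runs in the current shell and $count survives after it.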
Clearly there are countless ways to use an if, however testing if a file exists is a common one, so here's an example:
INPUT_FILE=~/test.txt
if [ -f "$INPUT_FILE" ]; then
  echo "Found $INPUT_FILE"
else
  echo "ERROR: $INPUT_FILE not found"
fi
The "-f" means check for a regular file, you can use "-d" to check if it is a directory and there are more options for other uses. You can also add an exclamation mark after the opening square bracket to do a "not" if you only want to do something if the file/directory is missing.
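A quick sketch of both variations, using mktemp so the directory genuinely exists and the file genuinely does not (the names are illustrative):

```shell
#!/bin/bash
DIR=$(mktemp -d)                  # a real, empty temporary directory
MISSING_FILE="$DIR/no-such-file.txt"

# -d: test for a directory
if [ -d "$DIR" ]; then
  dir_check="directory exists"
fi

# ! -f: "not a regular file", i.e. do something only when the file is missing
if [ ! -f "$MISSING_FILE" ]; then
  file_check="file is missing"
fi

echo "$dir_check / $file_check"
rmdir "$DIR"
```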
You can compare strings as follows:

if [ "$s1" == "$s2" ]; then

Alternatively you can test if s2 is contained within s1, although this pattern match only works inside double brackets:

if [[ "$s1" == *"$s2"* ]]; then
A good article on conditional if statements in Bash is Bash If Statements and Scripting - Linux Cheat Sheet | A Cloud Guru which goes into more depth.
The case statement is very handy, here is an example:
#!/bin/bash
if test -z "$1"; then
  # No environment specified, so display help
  echo "Please specify HOME or WORK as the argument"
  exit 1
fi

case "$1" in
  HOME|home)
    echo "Connecting via HOME network"
    # Do something specific to home
    ;;
  WORK|work)
    echo "Connecting via WORK"
    # Do something specific to work
    ;;
  *)
    echo "Unexpected argument: $1"
    echo "Please use HOME or WORK"
    ;;
esac

This should be self-explanatory.
This example works in Bash
#!/bin/bash
function sayhello {
  echo "Hello World"
}

function helloto {
  echo "Hello $1"
}

echo "-- Start"
sayhello
helloto "Geoff"
echo "-- End"
It should be similar in other shells, but note the use of parameters within the function; they are accessed just like the arguments passed to a script. It is also worth noting that these functions will still be available after the script has finished if they are defined in the .bash_profile file.
Sometimes you just want to display the date and or time in a log file or on screen but other times you want the log file itself to contain the date and time.
If you are working with a script and controlling things from there then the following gives some different examples and creates a few files with the "touch" command.
#!/bin/bash
echo "Starting...."
date
echo "Output in locale's datetime format"
date +"%c"
echo "Output locale's date and then time"
date +"%x %X"
echo "Currently $(date +"%F_%R")"
echo "       or $(date +"%F_%T")"

DATE_SUFFIX=$(date +"%Y%m%d")
echo ""
echo "The date is $DATE_SUFFIX"

LOG_FILE=output.txt
LOG_FILE=$DATE_SUFFIX"_a_"$LOG_FILE # with _ you need the quotes
echo "Logging to $LOG_FILE"
touch $LOG_FILE

LOG_FILE=output.txt
LOG_FILE=$(date +"%F_%T")_b_$LOG_FILE
echo "Logging to $LOG_FILE"
touch $LOG_FILE

LOG_FILE=output.txt
LOG_FILE=$(date +"%F_%H-%M-%S")_c_$LOG_FILE
echo "Logging to $LOG_FILE"
touch $LOG_FILE

echo "Done."
Other times you have a simple cron job that needs to log to a file with a date in it, in which case something like this should work:

/opt/batch.sh &> /var/log/cronjobs/batch_$(date +"%F_%H_%M_%S").log

However it should be noted that cron needs the percent symbol "escaping", so your crontab entry should be like this:

/opt/batch.sh &> /var/log/cronjobs/batch_$(date +"\%F_\%H_\%M_\%S").log
The date formatting is flexible, so look at date(1): print/set system date/time - Linux man page for more options. One option I like is
echo "Currently $(date +"%T.%N")"
This will print something like "Currently 21:12:03.591756490" which is the current time complete with nanoseconds.
I have used something similar to this to do multiple pieces of work at the same time. Internally each of those child scripts logs to its own log file, but the first part of each line logged is the time, so the files can all be sorted together and merged into output.log.
#!/bin/bash
echo "Deleting old output files..."
rm *.log

echo "Launching child scripts..."
./testing_green.sh &
PID1=$!
./testing_white.sh &
PID2=$!
./testing_black.sh &
PID3=$!

echo "Waiting for: $PID1 $PID2 $PID3"
wait $PID1 $PID2 $PID3

echo "Combining log files"
sort green.log white.log black.log > output.log
It is worth noting that $! gets the process id of the most recently backgrounded command; these are stored so that the wait command can pause this script until all the child scripts have completed.
It is important to note that when these scripts are launched they are started in their own shell, so anything like an environment variable being set does not come back to the calling script. There are two ways to run another script in the current shell:

source ./test_green.sh [arguments]
. ./test_green.sh [arguments]

Just to be clear, the dot is a synonym for the source command, which is itself a built-in command.
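A minimal sketch of the difference; the child script is created on the fly with mktemp purely to keep the example self-contained:

```shell
#!/bin/bash
# Create a throwaway child script that sets a variable.
child=$(mktemp)
echo 'MY_SETTING="from child"' > "$child"

MY_SETTING="unset"

bash "$child"            # runs in its own shell: the setting is lost
after_subshell=$MY_SETTING

source "$child"          # runs in the current shell: the setting sticks
after_source=$MY_SETTING

echo "after subshell: $after_subshell"
echo "after source:   $after_source"
rm "$child"
```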