Finding good, basic examples to help with using Linux commands can be tricky, and sometimes you need to know the command before you can find out what it can do, so here I am trying to give folks a head start. Once you know the command, you can type man ls
for example and get some help. However, sometimes the documentation is comprehensive but lacking good examples, so have a look at TLDR pages, which can be installed to run on the command line or used online. Another helpful online site is cheat.sh/:firstpage, which can also be used with curl.
Sometimes you want to run two commands with a single line of input, especially if they take some time to execute. This can be done by simply putting && between them, for example:
sudo apt-get update && sudo apt-get upgrade
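A quick sketch of how the shell's conditional operators behave, using true and false as stand-in commands:

```shell
# '&&' runs the second command only if the first succeeds (exit status 0)
true && echo "ran because the first command succeeded"
# '||' is the opposite: the second command runs only if the first fails
false || echo "ran because the first command failed"
```

So a chain like cmd1 && cmd2 stops at the first failure, which is exactly what you want for update-then-upgrade.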
There are also occasions where you want to run a command in the background, for example when starting a program with a GUI from a terminal. This can be done by placing an ampersand at the end of the line, which makes the command run in the background so you can continue using the terminal session, although it will still output to that terminal window:
uex ./file.txt &
If your command generates a lot of output you might find it easier to redirect it to a file, like this:
cmd > output_file.txt
However, you might notice this does not actually redirect everything. In fact it only redirects stdout, not stderr; to redirect both to the file use this:
cmd &> output_file.txt
See Linux Shell Scripts for further explanation.
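A self-contained sketch of the difference, using a throwaway shell function that writes one line to each stream (the function name and file paths are just examples):

```shell
# a function that writes one line to stdout and one to stderr
demo() { echo "normal output"; echo "error output" >&2; }

demo > /tmp/stdout-only.txt 2>/dev/null   # captures stdout, discards stderr
demo > /tmp/both-streams.txt 2>&1         # '2>&1' sends stderr to the same file
# 'demo &> /tmp/both-streams.txt' is the equivalent bash shorthand

cat /tmp/both-streams.txt                 # contains both lines
```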
Putting some of the commands listed here together gives some powerful command lines, see Useful Linux Tips and Tricks for more details.
Executing this command on its own lists all the defined aliases for your login. You can define a new alias thus:
alias lsbk='ls -al /tmp/backups'
which means you can execute "lsbk" and it will actually execute the ls command. This is usually defined in your ".profile" file and is very handy.
This is an interesting command that searches the whatis database, which is built from the man pages, so for example apropos who
will return things like "whois" and "whoami".
The awk
utility is a very powerful text-processing command and is fully documented in The GNU Awk User's Guide. I will explain a simple example I sometimes use, where it works in a similar way to cut. An example is:
cat /etc/hosts | grep local | awk '{print $2 " -maps to-", $1}'
This looks for matching lines in your hosts file and prints the hostname, followed by " -maps to-", then another space (which is triggered by the comma), and finally the IP address, which is the reverse of the order in the hosts file. Quick and simple.
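You can try the same idea without touching /etc/hosts by piping in a sample line (the IP/hostname pair here is just an example):

```shell
# field 1 is the IP address, field 2 the hostname; print them reversed
echo "127.0.0.1 localhost" | awk '{print $2 " -maps to- " $1}'
# prints: localhost -maps to- 127.0.0.1
```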
This is the Linux copy command and generally works exactly as you would expect. However, I have found that copying the contents of a directory which has hidden (dot) files in its root is not so simple when the target directory exists. So I found doing two commands as follows works:
cp -Rv ./drupal-7.34/* /var/www/html
cp -v ./drupal-7.34/.* /var/www/html
It is worth noting that "-R" means recursive, or in other words go down all the sub-directories, "-v" means be verbose and display everything you do, and "-Rv" is just both together. The first command copies all the files and directories, including dot files in subdirectories, but misses dot files in "./drupal-7.34/"; the second command copies these, although it does mention it could not copy the special directories "." and "..".
Another useful switch is "-a", which will do the copy preserving as much structure, attributes and SELinux information as possible, although it ignores failures to preserve such information. Also "-d" will copy symbolic links rather than the files they point to.
More information can be found with this command: info coreutils 'cp invocation'
The default delimiter is the tab character and you can optionally specify which field or fields, so cut -d: -f2
means give me the second field where : is the delimiter. You can also specify the output delimiter, which defaults to the input one; try the following:
cat /etc/passwd | cut -d: -f1,6-7 --output-delimiter=" - "
This outputs the first, sixth and seventh fields of the passwd file, separated by " - ". However, the following has the same effect:
cut -d: -f1,6-7 --output-delimiter=" - " /etc/passwd
You can just specify the file, rather than pipe it in.
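You can experiment with a single sample passwd-style line rather than the whole file (note that --output-delimiter is a GNU cut option):

```shell
# fields: name:password:UID:GID:comment:home:shell
line="root:x:0:0:root:/root:/bin/bash"
echo "$line" | cut -d: -f1,6-7 --output-delimiter=" - "
# prints: root - /root - /bin/bash
```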
Simple little command for doing DHCP stuff. If you have a Linux install without a desktop then you might find plugging a cable in does not automatically pick up an IP address from the DHCP server; dhclient -v
will get an IP address, the -v is verbose mode, just so you can see what is going on. There is also dhclient -r -v
which will release the IP address. By default this works on all network interfaces, you can specify one, like this dhclient -v -r eth0
The dig command is the replacement for nslookup and is an excellent DNS lookup utility. The basic use is as follows:
dig geoffdoesstuff.com any
This will get all the DNS info for the domain "geoffdoesstuff.com". If the command is missing on your distribution then you might need to install "dnsutils"; this is certainly the solution with Debian. Some useful options are as follows:
dig +nssearch geoffdoesstuff.com - list the name servers
dig +trace geoffdoesstuff.com any - show how the domain information was found
dig -x 1.1.1.1 - does a reverse lookup
This command converts a computer's DMI (Desktop Management Interface) or SMBIOS (System Management BIOS) information into human-readable form. For example, it will display motherboard make, model, version and serial number. One example of a specific use is:
dmidecode -s system-serial-number
Clearly the man page contains more details as does running the command without the "-s" option.
The "disk usage" command is great for seeing how much space the current directory and all its children take up; specifying "-h" is always good to get the output in "human readable" format, rather than blocks. However, sometimes you do not want a complete list of all subdirectories, so using
du -h --max-depth=1
is a good option.
The command file
is the best way to find out about executable files; for example, it tells you whether they are 32-bit or 64-bit. If you execute file /usr/local/bin/uex
, which is UltraEdit, then the result is this:
/usr/local/bin/uex: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
However, the command goes further: it works on all kinds of files, including scripts, certificates and more.
I use this most to find files, obviously, and whilst this is a powerful command I tend to just use it to search by filename, so to find something under the current directory: find . -name 'settings.php'
Sometimes though you want to search the whole machine, in which case I use find / -name 'mysql*' 2> /dev/null
The points to note in this example are that I changed the starting point from . (the current directory) to / (the root), and I also appended "2> /dev/null", which redirects stderr to null; in other words the error messages are not displayed, as you will inevitably get loads of errors about permissions when scanning the whole disk.
The find command just lists the name; if you want more detail, like you would get from ls -l /tmp/filename.txt
, then you need to do this:
find / -name 'vsftpd.conf*' -exec ls -l {} \; 2>/dev/null
The "\;" is an escaped semi-colon and is needed to terminate the -exec option, also note that the filename found is put where the braces are.
A useful option for the find command is "-type ?" where ? is "f" for files and "d" for directories, see the man page for more options.
Handy little utility to get the amount of free memory, just execute free -h
for a nice summary, where -h gives "human readable" output; the default output is in kilobytes and -m and -g give output in megabytes and gigabytes respectively. It is worth remembering that buffers and cache are in effect "unused", in that the operating system is using this memory but can give it to applications. You need to read the row "-/+ buffers/cache" or see linux - Meaning of the buffers/cache line in the output of free - Server Fault for details.
Creates new groups, for example groupadd -g 500 grpname
Clearly the number needs to be unused; ideally it should be unique. I believe groups are normally numbered 500 or above. If you want to know the highest currently used group number then the following will help:
cut -d: -f3 /etc/group | sort -g | tail -5
The easy and proper way to see which groups a user is a member of is groups usrname
; however, if you leave off the user name then it runs for the current user.
If you are working on the command line rather than via a UI then this command is needed for extracting from a .gz file. For example gzip -d vsftpd-3.0.2.tar.gz
will extract the tar file from the gz archive but note that it will also remove the original .gz file.
If you just want to look at the first few lines of a file then the head command is handy, where cat would list the whole file. So head thefile.txt
will list the first part of the file; you can also specify how many lines you want with head -n25 thefile.txt
to get the first 25 lines of the file.
Note that the head and tail commands work in a very similar way.
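A quick sketch with a generated file to see head and tail side by side:

```shell
# build a five-line sample file
printf 'line1\nline2\nline3\nline4\nline5\n' > /tmp/head-tail-demo.txt

head -n2 /tmp/head-tail-demo.txt   # first two lines: line1, line2
tail -n2 /tmp/head-tail-demo.txt   # last two lines: line4, line5
```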
This command, on Bash, shows you all the commands in your history. If you use history 10
then you will see just the last 10 commands you used. If you want to know when the commands were run then executing export HISTTIMEFORMAT='%F, %T '
before using the history command will show dates and times in the output. With a command number from the history output, you can execute that command again by simply doing !1012
to execute the 1012th command again; handily !1012:p
will print the command rather than execute it.
Simply run this command and it will display the fully qualified domain name of the local machine. If you use hostname -I
you will see a list of IP addresses used by your machine.
The iconv command is a GNU one that does character set conversion; the following example will convert a "UCS-2 LE BOM" file to UTF-8:
iconv -f UTF-16le -t UTF-8 UCS2-Test-File.txt > UTF-8-Output-File.txt
More complex conversions can be done with many other formats, execute iconv -l
to see the options on the specific system.
Executing this command on its own returns your user id and the ids of all the groups you are a member of, as well as SELinux information if that is on your distribution. If you execute id usrname
then you get the same information but on the user account specified.
The first point to note is that ifconfig
was deprecated in or before 2011 to be replaced by the more powerful ip
command. The way to do the equivalent of the basic ifconfig command is as follows:
ip addr sh
ip address show - longer version of the above
Also worth looking up the commands ifup
and ifdown
for enabling or disabling network interfaces.
This is an interesting tool that you can install to test network performance. There are hosted servers, but generally it is better to test on your own network with your own server and client. Visit esnet/iperf: iperf3: A TCP, UDP, and SCTP network bandwidth measurement tool for details on iperf3, which is now developed in parallel with iperf2 and includes a link to iperf2. It is also worth looking at iPerf - The TCP, UDP and SCTP network bandwidth measurement tool for binaries, servers and documentation.
This is used to terminate (or kill) a background process, for example kill 10824
will send a terminate signal to the process with a process ID of 10824. If the process is having issues and that fails then kill -9 10824
will forcibly terminate the process.
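A self-contained sketch: start a disposable background sleep, kill it by PID, and confirm it has gone ($! holds the PID of the last background command):

```shell
sleep 60 &                      # a stand-in for a long-running process
pid=$!                          # remember its process ID
kill "$pid"                     # polite terminate (SIGTERM)
wait "$pid" 2>/dev/null || true # reap it; ignore the termination status
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```

The -0 "signal" sends nothing; it just checks whether the process still exists.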
When you have lots of similar processes to terminate then you can do killall nc
to terminate all the nc processes. Note that this command behaves differently on AIX.
The last
command basically scans the file /var/log/wtmp, just type last on its own and see. The file contains logins and reboots. Executing last oracle
shows all the logins for the oracle user listed in the file. A common use is last reboot
which shows all the reboot times listed in the file, alternatively last -1 reboot
shows just the latest reboot, however I think the output of last reboot | head -1
is cleaner.
This handy little command will show which libraries are dynamically linked to an executable. So ldd /usr/local/sbin/vsftpd
will tell you which libraries are used by vsftpd. I have noticed on AIX that this command lists .a files, which are archives, so you need to use the ar command to look inside those.
This is used for creating links or more commonly, symbolic links. Executing ln -s /media/sf_Linux
will create a symbolic link to /media/sf_Linux in the current directory called sf_Linux. To specify a different name for the symbolic link use ln -s /media/sf_Linux HostLinux
, which still works in the current directory.
It is also possible to put the link somewhere else; for example, you might want a utility on your path, so one easy option is the following:
ln -s /var/opt/application/bin/util /usr/local/bin/util
This has the effect of putting "util" on the system path.
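A throwaway sketch under /tmp showing that reads go through the link to the real file:

```shell
rm -rf /tmp/ln-demo && mkdir -p /tmp/ln-demo
echo "real content" > /tmp/ln-demo/real.txt
ln -s /tmp/ln-demo/real.txt /tmp/ln-demo/link.txt   # link with its own name

cat /tmp/ln-demo/link.txt       # prints: real content
ls -l /tmp/ln-demo/link.txt     # shows 'link.txt -> /tmp/ln-demo/real.txt'
```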
I recently wanted to list the subdirectories off the current directory, which is not as easy as it should be. Here are some options:
ls -l | grep "^d" - this is the "obvious" solution, find lines beginning with a d
ls -l -d */ - seems clunky putting "*/" on the end but it does work, however it cannot show hidden directories
find . -maxdepth 1 -type d - nice option which works with hidden directories
This is a slightly obscure command to "list block devices", but what this means is it will list disks. The handy part is it will give the actual disk size, rather than the file system size, which could be smaller. It does show file system sizes too, so you can see where you have unformatted space.
Gives a summary of the CPU architecture. This command has options to list offline CPUs as well as to output in a parsable format.
This is a very handy command for seeing open network ports, however it looks at other things too. If you execute lsof -i
you will see a list of open ports and what has them open; if you see nothing, or suspect the list is short, then try again with sudo: sudo lsof -i
as it needs permission to see detail not belonging to ordinary users. If you run lsof -i -P
then you will see port numbers rather than port names.
If you want the complete list of just listening ports then try the following:
sudo lsof -iTCP -sTCP:LISTEN
but do watch out, as some port numbers will resolve to a name; to speed it up and stop name resolution of hostnames and ports (although only ports are relevant for listening) add in -nP
, which in full gives you this:
sudo lsof -nP -iTCP -sTCP:LISTEN
which works well on macOS.
See also: netstat
This is a handy little command to get help on other commands, you simply type man with the name of the command you are interested in as an argument. So help on the passwd command is obtained via man passwd
, you can then page down or press q to quit. On some machines you might get two commands with the same name and hence the wrong help! So man -a passwd
will show all the manual pages for all passwd commands. Note that passwd is the standard command to change passwords but also exists in OpenSSL to compute password hashes, so can have two entries.
Sometimes man pages can be a little long and confusing to read, in which case try using TLDR pages either by installing it or using online.
If you just type mkdir newdir
then the new directory "newdir" will be created in the current directory. Typing mkdir /tmp/newdir
will create newdir in /tmp, however it assumes /tmp exists. To create an entire tree then add a -p like this mkdir -p /tmp/one/two/three/newdir
which is convenient.
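A quick sketch; note that -p also makes mkdir succeed silently if the directory already exists:

```shell
rm -rf /tmp/mkdir-demo
mkdir -p /tmp/mkdir-demo/one/two/three   # creates the whole chain at once
mkdir -p /tmp/mkdir-demo/one/two/three   # second run: no error, nothing to do
```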
The mount command shows where all disks etc are mounted within the file system. It is also useful for seeing where a CD/DVD is mounted, especially when you switch between distributions and forget.
The nc or netcat command is very useful for testing network connections, open ports or anything TCP or UDP related. If you want to test access to a port on another machine then nc -v -w 5 192.168.56.101 5666
will help; it makes a TCP connection to port 5666 on 192.168.56.101, holding the connection open for 5 seconds before timing out, and does so in verbose mode.
If you need to have something listening then nc -l -p 2112
will open port 2112 and listen for a connection. If you then telnet to that server/port you can type and see it on the server where the nc command was run, proving network connectivity.
There is a lot more but that is a good starting point.
This does a similar job to lsof and can be run as netstat -lptu
or netstat -lptun
if you prefer port numbers to port names. If you see - symbols in the "PID/Program" column then this is a permission issue, which can be resolved with sudo.
With macOS it is not possible to get the process which is listening on a port, so for this you need to use lsof.
A simple nproc --all
will display a number showing how many CPUs the machine has.
This is the NTP client, which probably needs to be installed, and is easily run as follows:
sudo ntpdate -u pool.ntp.org
The sudo is needed to change the local date/time.
Changes your password, or that of another user if specified as an argument.
The rm
or remove command is capable of deleting files and directories, including directory trees. However, you can delete your entire file system, so do be careful; this is a good reason to never run as "root"! Some useful parameters are:
-f force, never prompt
-R recursive, so ideal for removing directories and all their contents
So a classic delete directory is this: rm -fR ./subdir
, which is a common use case.
This command is short for Secure Copy and it uses SSH to copy files from one server to another. An example syntax is scp root@192.168.56.102:/root/Downloads/nrpe-2.15.tar.gz root@192.168.56.101:/tmp/nrpe-2.15.tar.gz2
. If you are copying from your current logged in session to a remote machine you can shorten this slightly to scp /root/Downloads/nrpe-2.15.tar.gz root@192.168.56.101:/tmp/nrpe-2.15.tar.gz2
. The basic syntax here is scp source destination
.
If a file is specified then the sorted file is sent to standard out, there are plenty of handy options. If you are trying to sort numbers then using -g
will help, as this will treat numbers as numbers. You can sort multiple input files into one single file, for example:
sort output1.log output2.log output3.log > combined_output.log
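The difference -g makes is easy to see with a few numbers piped in:

```shell
printf '10\n9\n100\n' | sort      # lexical order: 10, 100, 9
printf '10\n9\n100\n' | sort -g   # numeric order: 9, 10, 100
```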
The source command is a built in command that executes the contents of a file as a script, so source filename [arguments]
is the same as . filename [arguments]
. In both these cases the specified script is run in the current shell.
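A minimal sketch: variables set in a sourced file survive in the current shell, which running the file as a separate script would not achieve (the file name and variable are just examples):

```shell
echo 'GREETING="hello from the sourced file"' > /tmp/vars.sh
. /tmp/vars.sh            # same as: source /tmp/vars.sh
echo "$GREETING"          # the variable is now set in this shell
```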
The socket statistics command is useful for displaying IP and Unix socket information, so you can see what is listening and much more. A common use is:
ss -t -l
This gives all listening TCP sockets.
The stat command gives more detailed information than ls -l
and is used simply with stat filename.ext
.
This command is used to "switch user", su fred
will switch to user fred. Using "su" on its own will switch to root. Note that su asks you for the user’s password. Using "-l" or "-" will effectively log you in as the user, executing the profile script and changing you to their home directory.
Used to run a single command with root privilege. The file /etc/sudoers is used to control who can run which commands via the sudo command. This is often used with su but can be used with other commands, thus giving granular control. The "classic" use is sudo su -
however leaving this open for any user is not recommended; restricting the usage in the sudoers file is essential.
The tail command will show the last part of a file, for example tail thefile.txt
, you can specify how many lines like this: tail -n20 thefile.txt
. A very common use of tail is to "follow" the end of a log file, which is done with tail -f thefile.log
and you can combine this with a number of lines too if needed.
Note that the head and tail commands work in a very similar way.
Simple command to extract tar files or gzip files or gzipped tar files, however it does have some more complex arguments. For me a common use of the tar command is when working with Drupal. To extract a Drupal archive, just run tar -xzvf ./drupal-7.34.tar.gz
. However Drupal archives have all their contents in a sub-directory within the archive, so you end up with everything extracted to ./drupal-7.34/ in this example. However executing tar --strip-components=1 -xzvf ~/Downloads/drupal-7.34.tar.gz
will strip this first level, which is very handy.
To add stuff to a new tar, use something like the following: tar -cvf new.tar /tmp/stuff
This will put /tmp/stuff and any sub-directories into the file "new.tar". Note the following switches:
-c create a new archive
-f means use the archive name provided
-t list files in the tar
-v means list the name of each file
-x means extract from the tar
-z to process the archive through gzip
By default tar will recurse through all the sub-directories.
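Putting the switches together in a self-contained round trip under /tmp (create, list, then extract into a separate directory; -C tells tar which directory to work from):

```shell
rm -rf /tmp/tar-demo && mkdir -p /tmp/tar-demo/stuff /tmp/tar-demo/out
echo "example data" > /tmp/tar-demo/stuff/file.txt

tar -czf /tmp/tar-demo/new.tar.gz -C /tmp/tar-demo stuff   # create, gzipped
tar -tzf /tmp/tar-demo/new.tar.gz                          # list the contents
tar -xzf /tmp/tar-demo/new.tar.gz -C /tmp/tar-demo/out     # extract elsewhere
```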
This is a network traffic capture tool. It can dump everything or filter to only show some traffic. If you want to see everything going to/from a specific website then try this:
tcpdump host www.example.com
There are many more options to dump to file, filter based on NIC, source, destination and so on.
It is important to note that this utility will not work unless it is running as root or with sudo, in fact it may even be missing from the path for a regular user.
This is actually a very handy command for using in scripts, for example test -n "$BASH_VERSION"
tests whether the length of the string is non-zero; however, it also does a lot of file and directory related tests.
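A few sketches; note that [ is essentially the same command as test, which is why scripts use the two forms interchangeably:

```shell
test -n "hello" && echo "the string is non-empty"
[ -d /tmp ] && echo "/tmp is a directory"        # '[' form of the same test
[ -f /no/such/file ] || echo "the file is missing"
```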
This command will "translate" characters and can do a number of clever things, like convert case, remove special characters etc. One nice use is to display the PATH or Java ClassPath with each item on its own line. Try this:
echo $PATH | tr ':' '\n'
The tr utility can also do "dos2unix" conversion, you just need to pipe through tr -d '\015'
this will remove Ctrl-M characters that you might see in vi. I prefer using "\015" as this is easier to type in than Ctrl-M, which needs you to press Ctrl-V first.
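A few quick sketches of tr with sample input piped in:

```shell
echo "/usr/bin:/bin:/usr/local/bin" | tr ':' '\n'   # one path entry per line
echo "Hello World" | tr 'a-z' 'A-Z'                 # case conversion
printf 'dos line\r\n' | tr -d '\015'                # strip carriage returns
```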
The uname command is useful for getting system information, for example uname -r
gives the Linux Kernel version. You can get the same information with cat /proc/version
, however uname does give other information.
This can remove adjacent duplicate lines, leaving just one unique copy, or remove them all, and there are options for counts etc.
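Because only adjacent duplicates are collapsed, uniq is usually paired with sort; a quick sketch:

```shell
printf 'a\na\nb\na\n' | uniq            # adjacent only: a, b, a
printf 'a\na\nb\na\n' | sort | uniq     # sort first for true uniqueness: a, b
printf 'a\na\nb\na\n' | sort | uniq -c  # -c prefixes each line with its count
```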
This command removes an environment variable set with export, so to set and then unset an environment variable you would do the following:
export http_proxy=http://server:port
unset http_proxy
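A sketch of the round trip, using a hypothetical variable name and placeholder value:

```shell
export DEMO_PROXY="http://proxy.example:8080"   # placeholder value
echo "${DEMO_PROXY:-not set}"                   # prints the URL
unset DEMO_PROXY
echo "${DEMO_PROXY:-not set}"                   # prints: not set
```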
This command shows how long a box has been running, however I have found it gets confused when running in a VM which you pause or put to sleep for a few days. If you just want the simple, how long then uptime -p
works nicely and uptime -s
displays when the box started up.
Create a new user, for example useradd -c "Geoff Lawrence" -d /home/geoff -m -u 50001 -U geoff
, which creates a new login called "geoff"; the parameters need explaining:
-c "comment" : the user's full name or a comment
-d home_directory : specify the home directory for the new login
-m : create the home directory if it does not already exist
-u uid : the numeric user id for the new login
-U : create a group with the same name as the user
Handy command to grant group membership to a user. Note the capital G; the lowercase g changes the default group, which is not usually what you want.
usermod -a -G grpname usrname
This is a handy alternative to "who" which lists logged on users.
When you need to do things in parallel this is a priceless command. The syntax is basically "wait" followed by one or more process ids or pids, for example:
wait 2112
The list does not need to be comma separated and you can get the pid of the previous command by using $!.
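A small sketch running two jobs in parallel and waiting for both, capturing each PID with $! as the job starts:

```shell
sleep 1 &            # first background job
pid1=$!
sleep 1 &            # second background job, running at the same time
pid2=$!
wait "$pid1" "$pid2" # block until both have finished
echo "both background jobs are done"
```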
The wc command will, quite simply, print the number of newlines, words or bytes for a given file or the pipeline, for example:
wc -l readme.txt - this will print the number of lines in the readme.txt file
cat readme.txt | wc -l - does the same as the previous command
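Redirecting the file into wc is another handy variant, because it prints only the number without the filename:

```shell
printf 'one\ntwo\nthree\n' > /tmp/wc-demo.txt
wc -l /tmp/wc-demo.txt     # prints the count and the filename
wc -l < /tmp/wc-demo.txt   # prints just: 3
```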
The who command has multiple uses, when used on its own it lists all the currently logged on user sessions. If you do who am i
then you will just see your session and then whoami
is a similar command that returns just your username. There is also the following:
who -b - show the system boot time
who -q - show the logged in user names and how many people are logged in
who -u - show full details of each logged in user
There is more but that's for another day!