What is Run Level in Linux?

A run-level is a system state that defines which processes to start and which services to enable or disable while booting. During the boot process, after the kernel has started the init program, init reads the /etc/inittab file, where the default run-level entry is saved, and then starts all the corresponding services.

There are seven run-levels in total in UNIX; the exact meaning of each run-level number may vary from one OS to another.

Types of Run-levels:

0 – Halt the system
1 – Single-user mode
2 – Multi-user mode without networking
3 – Full multi-user mode (text based)
4 – Unused/user-definable
5 – Full multi-user mode with graphical login (X11)
6 – Reboot the system

By default, Linux boots into either run-level 3 or run-level 5. You can also switch to a different run-level as per your needs.

To check the current Run Level settings:
#who  -r


Sample output for the above command would be
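For reference, on a SysV-style system the output looks roughly like this (the date and time fields will vary):

```shell
# who -r
         run-level 3  Jan 10 09:15
```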

To check the current and previous Run-level details
#runlevel

Sample output for the above command would be

N 3

In the above output, “N” indicates that the run-level has not been changed since the system was booted, and “3” is the current run-level.

To change the default Run-level 

The /etc/inittab file holds the default run-level entry. Open this file with the vi editor and change the run-level number to your desired one as follows.

Here I am going to change the run-level from 3 to run-level 5.

#vi /etc/inittab

Modify the initdefault line as follows

id:5:initdefault:

Replace 3 with 5 in the run-level field of that line. After the update, reboot the system to log in to the new run-level.
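The same edit can also be made non-interactively with sed; here is a minimal sketch that demonstrates it on a sample copy of the entry (on a real SysV system you would edit /etc/inittab itself, after backing it up):

```shell
# Work on a sample copy of the initdefault entry rather than the real file.
printf 'id:3:initdefault:\n' > /tmp/inittab.sample

# Change whatever the current default run-level is to 5.
sed -i 's/^id:[0-6]:initdefault:/id:5:initdefault:/' /tmp/inittab.sample

cat /tmp/inittab.sample
```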

To reboot use the following command
#init 6

I hope you find this guide useful.

More good stuff to come, stay tuned!

Mail me your queries to vasanth@linuxvasanth.com


For More Videos Subscribe My Youtube Channel  Linux Vasanth
If you found this article useful, Kindly Subscribe here 👉  Click this link to Subscribe






How to configure NFS server and NFS Client in Redhat Linux?

NFS (Network File System), developed by Sun Microsystems, is used for sharing files and directories between UNIX/Linux systems. NFS allows you to mount remote filesystems over the network and interact with them as if they were mounted locally on the same system.

NFS is based on RPC (Remote Procedure Call), which allows the client to automatically mount remote filesystems.

Advantages of NFS:

1. No manual refresh is needed for new files.

2. With NFS it is not necessary that both machines run the same OS.

3. It can be secured with firewalls and Kerberos.

4. File access can be controlled via IP addresses, groups, users, etc.

5. Central management of the system can cut the workload by as much as 80%.


Disadvantages of NFS:

1. The greatest disadvantage is security: because NFS is based on RPC (remote procedure calls), it is inherently insecure and should only be used on a trusted network behind a firewall.

Let us see some of the important services that are needed for NFS:

1. nfs = Translates remote file-sharing requests into requests on the local filesystem.

2. rpc.mountd = This service is responsible for mounting and unmounting the filesystems.

Configuration files for NFS:

1. /etc/exports = The main configuration file for NFS; all exported files and directories are defined in this file on the NFS server.

2. /etc/fstab = To mount NFS shared resources automatically on system reboot, we need to manually put an entry in this file.

3. /etc/sysconfig/nfs = Configuration file for NFS that controls on which ports RPC and other services listen.
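For example, an /etc/fstab entry for an NFS share might look like this (the server IP, share path and mount point are hypothetical):

```shell
# <server>:<exported path>   <mount point>   <type>   <options>   <dump> <pass>
192.168.1.10:/myshare        /data           nfs      defaults    0 0
```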

Check that the NFS daemon is listening on both UDP and TCP port 2049:
#rpcinfo -p   |grep nfs
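For reference, typical output looks roughly like the following (program and version numbers vary by distribution):

```shell
# rpcinfo -p | grep nfs
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
```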

From the above output, it is confirmed that NFS server is running and accepting calls on port 2049.

Check whether your system supports NFS or not:
#cat /proc/filesystems | grep nfs

Note: If you don’t see any output, it means NFS is not supported or the NFS module has not been loaded into your kernel.

To load the NFS module:
#modprobe  nfs

When everything is installed correctly, the NFS daemon should now be listening on both UDP and TCP port 2049, and portmap should be waiting for instructions on port 111.

Check whether portmap is listening or not
#rpcinfo -p | grep portmap

Configure NFS server:

Setup details:

1. NFS server: hostname=linuxvasanth.com, IP address=

2. NFS client: hostname=Dataserver, IP address=

As I said above, for sharing a directory we need to make an entry in the “/etc/exports” configuration file. In this example, I will share a directory named “myshare” in the “/” partition with the client server.

#mkdir  /myshare
Create some files and directories inside this directory
#cd  /myshare

#touch doc1 doc2 doc3

#mkdir d1 d2 d3

Now the /myshare directory has three files and three subdirectories.

Step:1 Make an entry in “/etc/exports” to make the directory shareable
#vi  /etc/exports



The above entry says that the directory myshare from “/” is shared with the client IP, with read and write permission and the sync option. You can also use the hostname in place of the IP address.
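As a sketch, such an entry would look something like this in /etc/exports (the client IP here is hypothetical):

```shell
# /etc/exports on the NFS server: share /myshare with one client, read-write
/myshare   192.168.1.20(rw,sync)
```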

NFS sharing options:

ro = provides read-only access to the shared files, i.e. the client can only read them

sync = confirms requests to the shared directory only once the changes have been committed

no_subtree_check = prevents subtree checking. When a shared directory is a subdirectory of a larger filesystem, NFS performs scans of every directory above it in order to verify its permissions and details; disabling the subtree check may improve the performance of NFS, but it reduces security.

Note: The default behavior of the NFS kernel daemon is to add the “no_subtree_check” option to your export line.

Step:2 Restart the NFS daemon

Once you have edited the /etc/exports file, you need to restart the NFS daemon to apply the changes.

Note: Depending upon your Linux distribution, the restart procedure for the NFS daemon may differ.

To restart the NFS service
#service nfs restart

To restart the RPC bind service
#service rpcbind restart

NFS and rpcbind are both required services for the NFS daemon.

rpcbind = The remote procedure call (RPC) service is controlled by rpcbind.

To list the NFS shared directories locally and remotely:
#showmount  -e

If this command shows an error, the communication might be blocked by a firewall.

Configuring NFS client:

Now at the NFS client end, we need to mount that directory to access it locally. To do this, we first need to find out which shares are available on the NFS server.

To mount the shared NFS directory
#mount  <option>  <NFSserver IP>:<NFS shared directory path>  <mount point directory path at NFS client>
To view the shared resources from NFS server:
#showmount   -e   <NFS server IP>

#showmount  -e

From the above output, one directory is shared by the NFS server.

Create a new mount point directory:
#mkdir  /data

Now mount the NFS share directory to your local mount point directory as follows,

#mount -t nfs <NFS server IP>:/myshare /data
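As a concrete sketch, with a hypothetical server IP of 192.168.1.10, the mount command would be:

```shell
# Hypothetical server address; substitute your NFS server's IP or hostname.
mount -t nfs 192.168.1.10:/myshare /data
```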

To check the mounted files
#df  -h

As you can see from the above output, the shared directory from the NFS server has been successfully mounted on the NFS client at the location /data.

To remove the NFS mount
#umount  /data

#df -h


The following services are associated with the NFS daemons, and each service has its script file in the init.d directory:

1. /etc/init.d/nfs = The main control script for the NFS daemons, which controls the NFS services.

2. /etc/init.d/nfslock = Script for the lock files and the statd daemon, which locks files and reports the status of files currently in use.

3./etc/init.d/rpcbind = RPC program number converter

4./etc/init.d/rpcgssd = script for RPC related security services

Note: If you want to run a script manually, you can execute it using the following syntax

#/etc/init.d/<script-name> {start|stop|restart|status}

Ex: /etc/init.d/nfs restart





Understanding “Network Bonding/Teaming” in Redhat Linux

How to configure Network Bonding/Teaming in Red Hat Linux

As system admins, we try to avoid server downtime by building in redundancy: for the “/” filesystem with RAID technology (mirroring the data), for SAN connectivity with multiple FC links managed by multipathing software, and so on. But how do you provide redundancy at the network level? As we all know, simply having multiple network cards (NICs) will not by itself provide redundancy: if either NIC1 or NIC2 fails, the network may still go down.

In Red Hat Linux, with the help of bonding/teaming, we can accomplish network-level redundancy. Once you have configured bonding over two NIC cards, if a failure occurs on either of them the kernel automatically detects the failed NIC and the network keeps working safely without any issues. Bonding can also be used for load sharing between the two physical links.

The diagram below shows how bonding works.

Let me show now how to configure network bonding in RHEL

Task: Configure Network bonding between eth0 and eth1 with name of bond0

Bonding driver: Linux allows binding multiple network interfaces into a single channel NIC by using a kernel module called bonding.
Tip: The behavior of the bonded interface depends upon the mode (the mode provides either a hot-standby or a load-balancing service).

Make sure you have two physical Ethernet cards available in your Linux server

Step:1 Check the network adapter details
#ifconfig |grep eth

As you can see from the above output, we have two network adapters with the logical names eth0 and eth1.

Step:2 Edit the configuration file for both the ethernet cards as follows
#vi  /etc/sysconfig/network-scripts/ifcfg-eth0

Add the following lines inside this file.
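A typical slave-interface configuration looks like the following sketch (these values are illustrative assumptions):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- slave of bond0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
```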

Do the same for the other interface, eth1.

Step:3 Create a “bond0” configuration file
#vi  /etc/sysconfig/network-scripts/ifcfg-bond0

Add the following parameters as shown below.

You will not find /etc/modprobe.conf in RHEL 6, so you need to define your bonding options (the BONDING_OPTS line) inside the above configuration file.
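As a sketch, the bond0 file would contain something like the following (the IP addressing here is hypothetical; BONDING_OPTS carries the options that older releases put in /etc/modprobe.conf):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- master bonding interface
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.50
NETMASK=255.255.255.0
BONDING_OPTS="mode=0 miimon=100"
```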

We can configure NIC bonding for various purposes, so when you do the configuration you will have to specify the purpose for which you want to use the bonding. Here are the modes available for bonding:

1. balance-rr or 0: Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface, beginning with the first one available.

2. active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface; another bonded slave interface is only used when the active bonded slave interface fails.

3. balance-xor or 2: Sets an XOR (exclusive-OR) policy for fault tolerance and load balancing. In this method, the interface matches up the incoming request’s MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.

4. broadcast or 3: Sets a broadcast policy for fault tolerance; all transmissions are sent on all slave interfaces.

Understanding miimon in network bonding: It specifies (in milliseconds) how often MII link monitoring occurs. This is very useful when high availability is required, because MII is used to verify that the NIC is active.


To check whether the driver for a particular NIC supports the MII tool, run the following command
#ethtool <interface name> | grep "Link detected"

#ethtool   eth0 |grep "Link detected"

As you can see from the above screenshot, the driver supports the MII tool.

Step:4 Load the bonding module
#modprobe  bonding

Step:5 Restart the network service to apply the changes
#service network restart

Step:6  Confirm whether your  configuration is working properly or not by using the following command
#cat /proc/net/bonding/bond0

As you can see from the above screenshot, NIC bonding interfaces are in active state.

Step:7 Verify whether “bond0” interface has come up with IP or not
#ifconfig -a

The above screenshot has confirmed the bonding interface has the IP address and it is in running state.

You can also notice that eth0 and eth1 have the flag “SLAVE”, while the bond0 interface has the flag “MASTER”.

To verify the current bonding mode, run the following command

#cat  /sys/class/net/bond0/bonding/mode

From the above output, the current mode is balance-rr  or 0

To check the currently configured bonds
#cat  /sys/class/net/bonding_masters

The above screenshot says we have one master bond with the name “bond0”

Note: From now on, even if one of your NIC adapters fails, the bond0 interface will continue running and provide uninterrupted service to the clients. The failed interface’s flag will change to the “down” state, and after the issue with the failed interface is resolved, the flag will change back to “Running”.


How to Unmount a Busy Filesystem in Linux

In our previous tutorial I explained the concepts of mounting and unmounting filesystems; now let us see how to unmount a busy filesystem. In Linux/UNIX, if a device is reported busy, the system will not let you bring it to an inactive state: when you try to unmount it, the filesystem will report busy (umount: /dev/***: device is busy). This can happen for various reasons:
1. Users are still accessing that filesystem.
2. Media is mounted at that mount point (CD/DVD/floppy/USB).
So bringing those filesystems to the unmounted state without any data loss is challenging for most system admins.
We have a utility called “fuser” that helps us unmount a busy filesystem without any data loss.

What is meant by fuser?

fuser helps us identify the processes that are currently accessing a filesystem, giving the owner name of each process, its process ID and much more. With this utility we can also apply options to get brief details from the fuser output. Here are some of the important options frequently used with the fuser utility:

k – Kill the process
c – Current Directory
e – Executable file being run
v – Verbose output
u – To get the username.
Let me show you how to unmount the busy filesystem with the help of “fuser” utility.

#fuser <option> <mount point directory path>

or

#fuser <option> <device name or filesystem>

Ex:1 Unmount the busy filesystem.

On my disk I have a filesystem /dev/sda2, and it is mounted on the mount point directory /home. As we all know, /home is the default home directory for normal user logins. Let me first log in to the server as a normal user (nirmal); after that, as the root user, I will try to unmount the /home filesystem. Obviously it will give the output “Device is busy”, as all the initialization files run from this directory to create the user’s login desktop.

Check the mounted filesystem details

#df -h

From the above output, the filesystem /dev/sda2 is mounted on the directory /home

Umount the /dev/sda2 filesystem

You can use either the device name or the mount point directory to unmount

#umount /dev/sda2

or

#umount /home

To learn mount and unmount concepts, click this link —> Mounting and Unmounting

The above output says the device is busy since it is being accessed by some processes. Now let us check how many processes are currently occupying the filesystem.

Identify the processes occupying the current directory

#fuser -c  /home
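Illustrative output (the PIDs will differ on your system):

```shell
# fuser -c /home
/home:   2856c  2901c
```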

From the above output, the numerical value indicates the process ID and the character “c” means “current directory”; so currently two processes are running on the filesystem. Now let us try to kill the processes occupying the mount point directory by running the following command:

#fuser -ck   /dev/sda2

k –>kill

Check whether the running processes were successfully killed or not,

#fuser -c /dev/sda2

From the above output, it is confirmed that all the processes were killed by fuser. Now try to unmount the filesystem.

#umount /dev/sda2

Now this time you will not see the device busy error

Now confirm whether /dev/sda2 is unmounted or not by running the following command

#df -h

From the above output, the filesystem /dev/sda2 successfully unmounted.

Ex:2  Display all the Processes that are using the current Directory

#fuser   .

Here “.” indicates the current working directory

From the above output, we can see that several processes are occupying the current directory.

Ex:3 Check with the -v verbose output

#fuser -v  .

The output now displayed the owner name of the process, PID  and much more in a separate column.
Note: You can also use the -u option with the “fuser” command to get the owner list for all the processes that are occupying your current directory

#fuser -cu  /home

Ex:4 Display which processes are using an executable

In this example, let me open the Firefox browser on my server using the command “firefox”, and then check whether “fuser” can identify the executable file path for the Firefox program.


Now, I will get the path for the executable program(firefox) by running the following command,

#ps  -aef  |grep firefox

From the output, the first line shows the executable path for Firefox; we will now use this path with fuser.

#fuser  /usr/lib64/firefox-3.6/firefox

 The output shows the PID of the process and “e” indicates the file is an executable one.

Ex:5 Unmount the filesystem with the “-f” option

You can also unmount a busy filesystem with the -f option (forcefully). But remember that running the following command may put your filesystem in a maintenance state, or data loss may occur, as it forcefully kills the running processes. So it is highly recommended that before you test this on your production box, you take a full backup of the particular filesystem, so that if any data loss occurs you can restore it later.
Note: Programs accessing the files will get an error after the filesystem is unmounted with the -f option.

#umount -f /home


