Understanding “Network Bonding/Teaming” in Red Hat Linux

How to configure Network Bonding/Teaming in Red Hat Linux

As system administrators, we try to avoid server downtime by building in redundancy wherever we can: mirroring the “/” filesystem with RAID, running multiple FC links to the SAN with the help of multipathing software, and so on. How do you provide redundancy at the network level? As we all know, merely having multiple network cards (NICs) provides no redundancy by itself: if the NIC carrying your traffic fails, the network goes down with it.

In Red Hat Linux, we can accomplish network-level redundancy with the help of bonding/teaming. Once you have configured a bond across two NIC cards, then if a failure occurs on either NIC the kernel automatically detects it and traffic continues safely over the surviving NIC without any issues. Bonding can also be used for load sharing between the two physical links.

[Diagram: two physical NICs presented to the network as a single bonded interface]

Let me now show you how to configure network bonding in RHEL.

Task: Configure network bonding between eth0 and eth1, with the bond interface named bond0

Bonding driver: Linux allows multiple network interfaces to be bound into a single channel by using the kernel module called bonding.
Tip: The behavior of the bonded interface depends upon the mode (the mode provides either hot-standby or load-balancing service).

Make sure you have two physical Ethernet cards available in your Linux server

Step:1 Check the network adapter details
#ifconfig | grep eth
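On a typical two-NIC system the output will look something like this (the MAC addresses here are just illustrative):

eth0      Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:C0
eth1      Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:C1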

As you can see from the above output, we have two network adapters, with the logical names eth0 and eth1.

Step:2 Edit the configuration files for both Ethernet cards as follows
#vi /etc/sysconfig/network-scripts/ifcfg-eth0

Add the following lines inside this file.
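Below is a typical slave-interface configuration for RHEL 6; treat it as a sketch and adjust the values (for example, add the HWADDR of your own card) to suit your system:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes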

Do the same for the other interface, eth1 (changing DEVICE=eth0 to DEVICE=eth1).

Step:3 Create a “bond0” configuration file
#vi /etc/sysconfig/network-scripts/ifcfg-bond0

Add the parameters as shown below.
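Here is a minimal example, assuming a static address; the IPADDR/NETMASK values are placeholders, so substitute your own:

DEVICE=bond0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=100"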

You will not find /etc/modprobe.conf in RHEL 6, so you define your bonding options with the BONDING_OPTS line inside the above configuration file instead.

We can configure NIC bonding for various purposes, so when you do the configuration you have to specify the purpose for which you want to use the bond. Here are the basic modes available with bonding (an example of selecting one follows the list):

1. balance-rr or 0: Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface, beginning with the first one available.

2. active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface; another bonded slave interface is used only when the active bonded slave interface fails.

3. balance-xor or 2: Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. In this method, the interface matches up the incoming request’s MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.

4. broadcast or 3: Sets a broadcast policy for fault tolerance; all transmissions are sent on all slave interfaces.
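For example, the ifcfg-bond0 sketch in Step 3 uses mode=0 (round-robin); to switch the bond to active-backup you would change its options line as follows (the mode can be given by number or by name):

BONDING_OPTS="mode=active-backup miimon=100"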

Understanding miimon in network bonding: it specifies, in milliseconds, how often MII link monitoring occurs. This is very useful when high availability is required, because MII is used to verify that the NIC is active.
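Once the bond is up, you can confirm the polling interval through sysfs (the 100 here matches the miimon=100 set above):

#cat /sys/class/net/bond0/bonding/miimon
100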

 

To check that the driver for a particular NIC supports MII, run the following command
#ethtool <interface name> | grep "Link detected"

#ethtool eth0 | grep "Link detected"
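If the link is healthy, you should see output like:

Link detected: yes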

As you can see from the above output, the driver supports MII.

Step:4 Load the bonding module
#modprobe  bonding
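You can verify that the module is loaded with lsmod (the size column will differ on your kernel):

#lsmod | grep bonding
bonding               127331  0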

Step:5 Restart the network service to apply the changes
#service network restart

Step:6 Confirm whether your configuration is working properly or not by using the following command
#cat /proc/net/bonding/bond0
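Trimmed, illustrative output for the round-robin configuration above (the driver version string depends on your kernel):

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth0
MII Status: up
Link Failure Count: 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0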

As you can see from the above output, the NIC bonding interfaces are in the active (up) state.

Step:7 Verify whether the “bond0” interface has come up with an IP address or not
#ifconfig -a
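Trimmed output, using the placeholder address from Step 3 (note that the slaves share the bond’s MAC address):

bond0     Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:C0
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

eth0      Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:C0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:C0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1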

The above output confirms that the bonding interface has the IP address and is in the running state.

You can also notice that eth0 and eth1 carry the flag “SLAVE”, while the bond0 interface carries the flag “MASTER”.

To verify the current bonding mode, run the following command

#cat /sys/class/net/bond0/bonding/mode
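With the configuration used here, this prints the mode name followed by its number:

balance-rr 0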

From the above output, the current mode is balance-rr (0).

To check the currently configured bonds
#cat /sys/class/net/bonding_masters
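Here it prints a single entry:

bond0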

The above output shows that we have one bond master, with the name “bond0”.

Note: From now on, even if one of your NIC adapters fails, the bond0 interface will continue running and provide uninterrupted service to the clients. The failed interface’s status flag will change to “down”, and after the issue with the failed interface is resolved, the flag will change back to the “up/running” state.

I hope you have enjoyed this article!


