Top 7 Linux Scenario Based Interview Questions and Answers (L1 & L2)

1.What are ulimit and umask?

ulimit is a built-in Linux shell command used to control the resources available to the shell and to the processes it starts.

umask (user file-creation mask) controls the default permissions assigned to newly created files and directories.
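
A quick way to see both in action (a minimal sketch; the 022 mask value is just a common example):

```shell
# Show the current resource limits for this shell
ulimit -a

# Show the current file-creation mask
umask

# With a umask of 022, a new file gets 666 - 022 = 644
umask 022
rm -f /tmp/umask_demo
touch /tmp/umask_demo
stat -c '%a' /tmp/umask_demo    # prints 644
rm -f /tmp/umask_demo
```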

2.What is a run level in Linux and how do you change it?

A run level is a state of init and of the whole system that defines which system services are running. Run levels are identified by numbers, each serving a different purpose.

To change the default run level, edit the /etc/inittab file and change the default init entry “id:5:initdefault:”.

You can switch to a different run level with the “init” command followed by the run-level number (e.g. init 3).
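
For example, the default run-level entry in /etc/inittab looks like this (the 5 here is just an example; the trailing colon is part of the syntax):

```
# /etc/inittab -- default run level entry
id:5:initdefault:
```

Changing the 5 to a 3 makes the system boot to multi-user text mode on the next reboot.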

3.Scenario 1: On one of my Linux production servers, the storage team has extended a partition from their end. How do you re-scan the partition and extend it from Linux without rebooting?

Answer: In my case the 6th disk on controller 1 was extended by the storage team. First re-scan it with the following command (replace “device” with the actual SCSI device name):

#echo 1 >/sys/class/scsi_device/device/rescan


Now resize the physical volume (PV) on that disk with the “pvresize” command.

After this, check the size of the volume group; it should now display the new extended size, and with the “lvextend” command we can easily extend the logical volume.
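
Putting the whole flow together, a typical sequence looks like the sketch below. The device path, VG and LV names (/dev/sdf, vg01, lv_data) are placeholders; substitute your own before running anything:

```shell
# 1. Re-scan the extended disk (keep the <device> placeholder until you know the real name)
echo 1 > /sys/class/scsi_device/<device>/rescan

# 2. Grow the physical volume into the new space
pvresize /dev/sdf

# 3. The volume group should now show the extra free space
vgs vg01

# 4. Extend the logical volume; -r also resizes the filesystem on it
lvextend -r -L +10G /dev/vg01/lv_data
```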

4.What is the maximum length allowed for a file name in Linux?

A file name can have a maximum of 255 characters. This limit does not include the path name, so the full path plus file name can easily exceed 255 characters. Interviewers often ask this question to confuse candidates by then asking whether the limit includes the path name, so be prepared with the complete answer.

5.What is network bonding?

It is the process of bonding or joining two or more network interfaces to create a single logical interface. It improves bandwidth and provides load balancing and redundancy: if one of the interfaces goes down, the others continue to work.

Visit my website to know more about network bonding: Understanding Network Bonding
6.Scenario: On one of my Linux servers the Sybase database is not running because of tmpfs. The Sybase team wants to extend the tmpfs file system from 1.5 GB to 4 GB.

Now check the tmpfs file-system details

#df  -h

Now edit the /etc/fstab file to increase the size

Change the size option on the tmpfs line
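
Assuming the tmpfs entry lives in /etc/fstab, the change looks roughly like this (the /dev/shm mount point is an example; use the path shown in your df output):

```
# before
tmpfs   /dev/shm   tmpfs   defaults,size=1536M   0 0
# after
tmpfs   /dev/shm   tmpfs   defaults,size=4G      0 0
```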

Remount the filesystem as shown below

#mount  -o  remount  tmpfs

#df -h

As you can see from the above output, the tmpfs size has now been extended to 4G.

7.Scenario: df and du commands show different disk usage

Solution: This is usually caused by deleted-but-open files, i.e. when someone deletes a log file that is still open or in use by another process. In that case the file name is removed, but its inode and data are not freed until the process closes the file.

Using the “lsof” command we can get the list of open files. Run it against /var to get the details:

#lsof  /var |egrep  "^COMMAND|deleted"

To release the space, we can restart or kill the process holding the deleted file, using its PID.
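
You can reproduce the deleted-but-open behavior safely with a throwaway file (a minimal sketch using a shell file descriptor in place of a real daemon):

```shell
tmp=$(mktemp)
exec 3>"$tmp"         # keep file descriptor 3 open on the file
echo "log data" >&3
rm "$tmp"             # the name is gone, but the data is still held by fd 3
ls -l /proc/$$/fd/3   # the link shows the path marked "(deleted)"
exec 3>&-             # closing the descriptor finally frees the space
```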

I hope you find this guide useful

More good stuff to come, stay tuned!

For More Videos Subscribe My Youtube Channel  Linux Vasanth
If you found this article useful, Kindly Subscribe here 👉  Click this link to Subscribe


Monitoring Commands in RedHat Linux Servers

Important Monitoring commands in Linux

For a system and network administrator, it is tough to debug and monitor Linux server activity and performance daily. In this tutorial I have compiled some important monitoring commands that may be useful for Linux/UNIX administrators. These commands are available on all flavors of UNIX and are very useful in probing the cause of errors.

1.vmstat(Virtual Memory Statistics):

This command displays statistics for virtual memory, CPU activity, I/O blocks, kernel threads and more.

Some Linux distributions do not have this command by default; you will need to install the package that provides vmstat (procps on Red Hat based systems).
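
A typical invocation samples at an interval; the first output line shows averages since boot (a minimal sketch):

```shell
# 3 samples, 2 seconds apart; columns include r, b, swpd, free, si, so ...
vmstat 2 3
```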


2. To check the Active and Inactive Memory Details:
#vmstat -a

From the above output you can check the active and inactive memory details. The columns si, so and free have the following meaning:

si = memory swapped in from disk every second (in KB)

so = memory swapped out to disk every second (in KB)

free = total free memory
3.lsof(List of Open Files):

This command is very useful for analyzing which processes have which files open; the open files include disk files, pipes, devices and network sockets. For example, when you try to unmount a filesystem and it will not unmount, some process is still accessing it; running “lsof” against the filesystem gives a full report of which processes hold files open on it.

4. To list all open files:

#lsof

The above output shows a long listing of open files.

FD = file descriptor; under this column you will see values such as:

cwd = current working directory

rtd = root directory

mem = memory-mapped file

txt = program text (code and data)

TYPE of file and its identification:

DIR = directory

REG = regular file

To learn more about “lsof” command visit this link Importance of lsof command


5.tcpdump (Network Packet Analyzer):

tcpdump is the most widely used command-line network packet analyzer (packet sniffer). It captures the TCP/IP packets received or transmitted on a specified network adapter, and it can also save the captured data to a file for later analysis.

6. To capture the packets from a specific interface :
#tcpdump   -i   eth0

eth0 = logical name of the network adapter; 0 indicates the first adapter

Cancel the program by pressing ctrl+c, you will see the below output,

Note: This command saves the output in "pcap" format, which can be read back with tcpdump (or other pcap-aware tools such as Wireshark)
7. To capture only “N” number of packets:

By default the “tcpdump” command captures packets on the specified interface until you cancel the program; with the “-c” option you can capture only a specified number of packets.

Below example  captures only 4 packets

#tcpdump   -c 4 -i eth0

8. To check the number of interfaces on your server, run the following command
#tcpdump  -D

8.To capture and save the Packets in a File:
#tcpdump   -w   mylog.pcap    -i   eth0

mylog.pcap= filename along with the extension .pcap

9.To View the Captured Packet Files
#tcpdump   -r    mylog.pcap

10.To Capture Packets from a specific Port:

For example, To capture the packets from the “ssh” port, run the following command,

#tcpdump   -i  eth0  port  22

11.Netstat(Network Statistics):

This command is very useful for monitoring incoming and outgoing connections, and you can also monitor interface statistics with it. When you have connectivity issues to your server, the first thing to check is whether the port is in a listening or non-listening state, which can be done with netstat. It is very useful for network administrators analyzing network-related problems.

12. To check all Listening ports of TCP and UDP Connections:
#netstat -a  |more

From the above output, one client is connected to my server via the ssh port and the connection status is ESTABLISHED

13.To List only TCP connection details
#netstat -at

14.To Display the Full Statistics by Protocols:

By default, statistics are displayed for the TCP, UDP, ICMP and IP protocols; the -s option shows the summary statistics for each of them

#netstat  -s

You can check the full statistics by protocols like Number of active connections, the total number of packets received, dropped and many more.

15.To display the statistics by TCP Protocols.
#netstat  -st

You can check the total number of active connections, failed connection attempts and much more with this command.


16.IOTOP Command:

This command is very similar to the “top” command; the difference is that with iotop you can check real-time disk I/O per process. It is useful for finding the exact processes with the highest disk read/write usage.

#iotop

I hope you have enjoyed this tutorial; if so, kindly subscribe and share it with your friends.
                                          🙏🙏 Thank you 🙏🙏



Managing Solaris OS FileSystem

A filesystem is a collection of files and directories that make up a structured set of information. The Solaris OS supports three different types of filesystems:

1.Disk-Based Filesystem

2.Distributed Filesystem

3.Pseudo filesystem

Let us see the types one by one in detail

Disk-Based Filesystem:

These types are found on hard disks, CD-ROMs, DVDs and floppy disks. The following are examples of disk-based filesystems:

UFS – The Unix File System is the default file system in the Solaris operating system; it is based on the Berkeley Fast File System.

hsfs – The High Sierra filesystem is a special type of filesystem developed for CD-ROM media.

pcfs – The PC filesystem is the UNIX implementation of the DOS (Disk Operating System) FAT32 filesystem. It allows the Solaris OS to access PC-DOS-formatted filesystems, and it lets users read and write PC-DOS files directly with UNIX commands.

udfs – The Universal Disk Format filesystem is used for optical storage targeted at DVD-ROM media. It allows universal data exchange and supports read and write operations.


Distributed Filesystem:

This type of filesystem gives network access to filesystem resources.

NFS – The Network File System allows users to share files among many types of systems on the network. NFS makes part of a filesystem on one system appear as though it were part of the local directory tree.

Pseudo filesystem: 

These are memory-based filesystems. They provide better system performance while also giving access to kernel information. Pseudo filesystems include the following:

1.tmpfs = The temporary filesystem stores files in memory, which avoids the overhead of writing to a disk-based filesystem. The tmpfs filesystem is created and removed automatically every time the system is rebooted.

2.swapfs = The swap filesystem is used by the kernel to manage the swap space on disks.

3.procfs = The process filesystem holds the list of active processes under the /proc directory, each listed by its process number. The information in this directory can be fetched with the ps command.

4.mntfs = The mount filesystem provides the read-only information from the kernel about the locally mounted filesystem details.

5.devfs = This filesystem is used to manage the namespace of all devices on the system; it is mainly used for /devices.


So in our next article, I will show you how to create partitions in Solaris.

Never miss an article Do like my official  FB page 👉🏿 Learn Linux in an easier way


Learn Solaris OS Device Naming conventions

In the Solaris operating system all devices are identified by three different names. Let us see what types of names are available:

1.Logical device name or Block disk devices

2.Physical device name or Character disk devices.

3.Instance name

Logical Device Name:

A user accesses hardware devices through logical names: after logging in to the operating system, a user refers to a device by its logical name. In other words, a logical device name is used to refer to a device when entering commands on the command line.

All logical device names are kept under the /dev directory, and they are symbolic links to the physical device names under the /devices directory. All disks have entries inside /dev/dsk (block device path) and /dev/rdsk (raw/character device path).

rdsk means RAW DISK

The logical device name contains the controller number, target number, disk number and slice number i.e c#t#d#s#

To check all the logical device names run the following command

# ls /dev/dsk
c0t0d0s0 c0t0d0s4 c0t2d0s0 c0t2d0s4 c1t1d0s0 c1t1d0s4
c0t0d0s1 c0t0d0s5 c0t2d0s1 c0t2d0s5 c1t1d0s1 c1t1d0s5
c0t0d0s2 c0t0d0s6 c0t2d0s2 c0t2d0s6 c1t1d0s2 c1t1d0s6
c0t0d0s3 c0t0d0s7 c0t2d0s3 c0t2d0s7 c1t1d0s3 c1t1d0s7

c0t0d0s0 to c0t0d0s7 = device names for slices 0 to 7 of the disk attached to controller 0, target 0, disk unit 0.

c0t2d0s0 to c0t2d0s7 = device names for slices 0 to 7 of the disk attached to controller 0, target 2, disk unit 0.

c1t1d0s0 to c1t1d0s7 = device names for slices 0 to 7 of the disk attached to controller 1, target 1, disk unit 0.

Note: On X86 hardware you will not find target, target shows only on SPARC hardware.

Physical Device Names:

The physical device name describes the device's hardware location, i.e. the complete hardware path of the device. Physical names contain a series of node names separated by slashes, indicating the path to the device. All physical device names are kept under the /devices directory.

To check all the physical device name details

#ls  -l  /dev/rdsk

To list an individual disk's hardware path details

#ls -l /dev/dsk/c0d0s0

Note: I am running the Solaris server on x86 hardware, which is why the above output does not show the target ID.

3.Instance Names:

The kernel assigns a shortened name to every device connected to the server; this is called the instance name. You can think of it as an abbreviation of the physical device name.

Let me show you this with one example:

1.sdn = here sd is the disk name and n is the number, such as sd0 for the first SCSI disk device

2.dadn = here dad is the disk name and n is the number, such as dad0 for the first IDE disk device

For example, run “ls -l /dev/rdsk” to get the physical path details; from that output you can find the instance name, as below.


As you can see from the above screenshot, sd shows it is a SCSI disk and the disk number is 0.

How to list the system device details?

In the Solaris operating system there are several ways available to list device physical path information. Let us see them one by one.

1.The /etc/path_to_inst file

As explained above, for each and every device the system stores its physical name and instance name inside the /etc/path_to_inst file. These names are used by the kernel to identify the devices. This file is maintained by the kernel, and it is not recommended to edit this file for any purpose.

Let me show the entries inside the /etc/path_to_inst file below

Note: Different systems have different physical device paths

The following is a /etc/path_to_inst file from a system with a different bus architecture; in this case it is an example of a system with an onboard Sun system bus (SBus).

# cat /etc/path_to_inst
# Caution! This file contains critical kernel state
"/sbus@1f,0" 0 "sbus"
"/sbus@1f,0/espdma@e,8400000" 0 "dma"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000" 0 "esp"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@3,0" 3 "sd"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@2,0" 2 "sd"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@1,0" 1 "sd"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@0,0" 0 "sd"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@6,0" 6 "sd"
"/sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@5,0" 5 "sd"

2. The prtconf command

prtconf means “print configuration”; it is used to get system configuration details such as the total memory installed and the configuration of peripherals, formatted as a device tree. The main advantage of prtconf is that it displays all possible instances of devices, whether or not the device is attached to the system.



If you do not want to see the devices that are not attached, filter them out by piping prtconf through grep -v:

#prtconf   |grep  -v not


3.With “format” command

Using the format command you can get both the physical and logical names of the disks connected to your server, and you can also check how many hard disks are connected. (In Linux we use the fdisk command to list all disk details; in Solaris we use the format command for the same purpose.)


Note: Press Control+d to exit  the format command without selecting the disk.



Understanding Basic Architecture of a Disk(Solaris)


Before we start with the administration part of the Solaris operating system, it is a must to know the basic architecture of a disk. Basically, a disk device has both physical and logical components.

The physical components have disk platters and read and write heads

The logical components have disk slices, cylinders, tracks and sectors

Structure of physical disk explanation:

1.The disk storage part is composed of one or more platters

2.The platters rotate.

3.The head actuator arm moves the read/write heads radially as a unit; the heads read and write data on the magnetic surfaces on both sides of the platters.

Sector = the smallest addressable unit on a platter; by default one sector can hold 512 bytes of data. Sectors are also known as disk blocks.
Track = a series of sectors positioned end to end in a circular path.
Cylinder = a stack of tracks.


What are Disk Slices in Solaris?

In RHEL, after dividing a disk into individual partitions we call them logical partitions, LVM partitions or RAID devices, depending on the partitioning type used. In Solaris, each such division is called a slice: once the disk is divided into individual partitions, they are known as disk slices.

For example, One slice can hold critical file system data and another slice on the same disk holds user related files and many more.

Note: A disk under the Solaris OS is divided in to 8 slices i.e labeled from slice 0 to slice 7

Note: Slice 2 contains important data about the whole disk: the size of the disk and the total number of cylinders available for the storage of files and directories.

Each slice is defined by a starting cylinder and an ending cylinder; these cylinder values determine the size of the slice.

Let me show you one example

Imagine I have 3200 cylinders in total (in human-readable terms, 32 GB).

slice 0 offset cylinders (0-1500), so the total cylinder count for slice 0 is 1501

slice 1 offset cylinders (1501-2000), so 500 cylinders in total

slice 2 offset cylinders (0-3199), so the entire cylinder count is 3200

offset means the starting cylinder

Let us have a look at the below tabular column about the disk slices


Disk slice Naming convention:

Knowing the disk slice naming convention is a must in order to learn Solaris disk management. An eight-character string represents the entire name of a slice. This string includes:

1.controller number

2.target number

3.disk number

4.slice number

Controller number:

It identifies the Host Bus Adapter (HBA), which controls the communication between the system and the disk unit. The HBA is responsible for sending and receiving commands and data to and from the device.

All controller numbers are assigned in sequential order such as c0,c1,c2,c3 so on…

Target Number:

A target number is a unique hardware address assigned to each disk, tape or CD-ROM. As with controllers, target numbers are assigned in sequential order: t0, t1, t2, t3 and so on.

Disk Number:

This number reflects the disk at the target location. The disk number is also called the LUN.

Slice number:

The slice number ranges from 0 to 7, i.e. s0 to s7: eight slices in total.

The below diagram shows the string that represents the full name of the disk slice.

In our next article, I will explain the OS device naming conventions in Solaris.




Understanding “Network Bonding/Teaming” in Redhat Linux

How to configure Network Bonding/Teaming in Red Hat Linux

As system admins we avoid server downtime by providing redundancy: for the “/” filesystem with RAID technology (mirroring the data), for SAN connectivity with multiple FC links and multipathing software, and so on. How do you provide redundancy at the network level? As we all know, having multiple network cards (NICs) alone does not provide any redundancy: if either NIC1 or NIC2 fails, the network can still go down.

In Red Hat Linux, with the help of bonding/teaming we can accomplish network-level redundancy. Once bonding is configured across two NIC cards, if a failure occurs on either NIC the kernel automatically detects it and the bond keeps working without any issues. Bonding can also be used for load sharing between the two physical links.

The diagram shows how bonding works.

Let me now show how to configure network bonding in RHEL.

Task: Configure Network bonding between eth0 and eth1 with name of bond0

Bonding driver: Linux allows binding multiple network interfaces into a single channel/NIC using a kernel module called bonding.
Tip: The behavior of the bonded interface depends upon the mode (the mode provides either hot-standby or load-balancing service).

Make sure you have two physical Ethernet cards available in your Linux server

Step:1 Check the network adapter details
#ifconfig |grep eth

As you can see from the above output we have two Network adapters with the logical name eth0 and eth1.

Step:2 Edit the configuration file for both the ethernet cards as follows
#vi  /etc/sysconfig/network-scripts/ifcfg-eth0

add the following lines inside this file
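
For example, a typical RHEL 6 slave configuration looks like the sketch below (these are common-default example values, not taken from the original article; keep any HWADDR/UUID lines your file already contains):

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
USERCTL=no
```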

Do the same for another interface eth1

Step:3 Create a “bond0” configuration file
#vi  /etc/sysconfig/network-scripts/ifcfg-bond0

add the following parameter as shown below
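
A typical ifcfg-bond0 for RHEL 6 looks like the sketch below; the IP address, netmask and the mode in BONDING_OPTS are examples to adapt to your network:

```
DEVICE=bond0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"
```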

You will not find /etc/modprobe.conf in RHEL 6, so you need to define your bonding options inside the above configuration file (the BONDING_OPTS line).

We can configure NIC bonding for various purposes, so when you do the configuration you have to specify the mode for which you want to use the bonding. Here are the modes available:

1.balance-rr or 0: Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface, beginning with the first one available.

2.active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface; another bonded slave interface is used only when the active one fails.

3.balance-xor or 2: Sets an XOR policy for fault tolerance and load balancing. In this method the interface matches the incoming request's MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.

4.broadcast or 3: Sets broadcast policy for the fault tolerance, All transmissions are sent on all slave interfaces.

Understanding miimon in network bonding: it specifies (in milliseconds) how often MII link monitoring occurs. This is very useful when high availability is required, because MII is used to verify that the NIC is active.


To  check that the driver for a particular NIC  supports the MII tool,run the following command
#ethtool  <interface name> | grep "Link detected"

#ethtool   eth0 |grep "Link detected"

As you can see from the above screenshot, the driver supports the MII tool.

Step:4 Load the bonding module
#modprobe  bonding

Step:5 Restart the network interface to make the changes update
#service network restart

Step:6  Confirm whether your  configuration is working properly or not by using the following command
#cat /proc/net/bonding/bond0

As you can see from the above screenshot, NIC bonding interfaces are in active state.

Step:7 Verify whether “bond0” interface has come up with IP or not
#ifconfig -a

The above screenshot has confirmed the bonding interface has the IP address and it is in running state.

You can also notice that eth0 and eth1 have the flag “SLAVE”, while the bond0 interface has the flag “MASTER”.

To verify the current bonding mode, run the following command

#cat  /sys/class/net/bond0/bonding/mode

From the above output, the current mode is balance-rr  or 0

To check the currently configured bonds
#cat  /sys/class/net/bonding_masters

The above screenshot says we have one master bond with the name “bond0”

Note: From now on, even if one of your NIC adapters fails, the bond0 interface will keep running and provide uninterrupted service to the clients. The failed interface's flag changes to the “down” state, and after the issue with the failed interface is resolved the flag changes back to “Running”.

I hope you have enjoyed this article.


**********************Thank you**************************************************************************

Understanding SSH and SCP Protocols in Linux Operating System

What is SSH?

SSH (Secure Shell) is a protocol that lets you open a remote terminal or shell session on any Unix-based server and execute commands there, according to the permissions of the account you log in with. The primary advantage of SSH over other protocols, including telnet, is that everything you do during the session is encrypted, so anyone watching the traffic between you and the remote host sees only unreadable text.

Note: SSH stands for Secure Shell. All SSH sessions are encrypted and require authentication. It provides a very safe and secure way of executing commands and configuring services remotely. Another important point: when you connect to a remote server using SSH, you log in using an account that exists on that remote server.

Note: The port number for SSH protocol is 22

An overview of how SSH works

1.When an administrator connects to a remote server using SSH, they are dropped into a shell session (usually bash) with a text-based interface. Whatever commands are typed into the local terminal are sent through an encrypted SSH tunnel and executed on the server.

2.The SSH connection is based on the client-server model: for an SSH connection to be established, the remote server must be running the SSH daemon (sshd). The daemon listens for connections on the SSH port, authenticates connection requests and allows the connection if the user provides the correct credentials.

3.The client system must have SSH client software. This software knows how to speak the SSH protocol and is given the remote host, the username to use, the credentials to pass for authentication and so on.

How does SSH Authenticate users?

1.Most clients authenticate using a password, which is less secure and not recommended; use SSH keys instead, which are a much more secure way to connect.

2.SSH keys are sets of cryptographic keys used for authentication. Each set contains a public key and a private key.

Public Key: It is made available to everyone, it can be shared with anyone without concern.

Private Key: It must remain confidential to its respective owner

Note: Whatever is encrypted with a public key can only be decrypted with its corresponding private key.

Let me tell you how ssh key-based authentication works at the backend,

To authenticate using SSH keys, the user must have an SSH key pair on their local system, and the public key must be copied to a file in the user's home directory on the remote server, at ~/.ssh/authorized_keys. This file contains a list of public keys, one per line, that are authorized to log in to that account.

When a user connects to a host wishing to use SSH key-based authentication, the client informs the server of this and tells it which public key to use. The server checks its authorized_keys file for that public key, generates a random string and encrypts it using the public key; this encrypted message can only be decrypted with the associated private key. The server then sends the encrypted message to the client to test whether it actually holds the associated private key.

Upon receipt of this message, the client decrypts it using the private key; the two values are then compared, and if they match the connection is allowed. This is how SSH key-based authentication works.
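
Setting this up takes two commands. The sketch below generates a throwaway key pair; the remote account user@remotehost is a placeholder, so the copy step is shown commented out:

```shell
# Generate an RSA key pair; -N "" means no passphrase (fine for a demo only)
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 2048 -f /tmp/demo_key -N "" -q

# The public half is what ends up in ~/.ssh/authorized_keys on the server
cat /tmp/demo_key.pub

# Install it on the remote account (placeholder host; run when ready):
# ssh-copy-id -i /tmp/demo_key.pub user@remotehost

rm -f /tmp/demo_key /tmp/demo_key.pub
```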

Now let us see how to connect to the remote server with SSH protocol

Ex:1 Connect to remote server  from local server:

The basic syntax for this is as follows:

#ssh <remote server ip address or host name>

In this example, I use my two  Linux servers for the demonstrations.

Server details:

Server1 IP address: at USA

Server2 IP address: at LONDON

Let us see how to connect to server2 from server1.

#ssh

If this is the first time you are using SSH, you will see the messages below on your terminal.


After answering “yes”, the server is added to your list of known hosts (~/.ssh/known_hosts).

Every server has a host key, and the above confirmation question is about verifying and saving that host key, so that the next time you connect the client can easily verify it is the same trusted server. After the server authentication finishes successfully, it asks for a password.

Note: By default, SSH allows direct root login, so here you give the root user password of the remote server.

Now you can execute commands, configure services and so on. For example, here my task is to create a user account and password on the remote server.

The above output says the account has been created successfully on the remote server.

Once you are done with your task on the remote server, you can leave or disconnect the session by using the exit command.

To exit the connection


As you can see from the above output, after executing the “exit” command the remote login session is disconnected and your terminal returns to your local server session.

Ex 2: How can I log in as a normal user to a remote server?

In our first example I explained how to log in to a remote server as the root user. As you know, by default SSH allows direct root login; if instead you want to connect to the remote server as a non-root user, use the following syntax.

Note: Check that the non-root user account exists on the remote server before you start.

Syntax: #ssh non-rootuser@remoteserverip

#ssh john@<server2 IP address>

After giving the user john's password, you will be connected to the remote server's terminal session as follows

Now if you want to gain root access, you can use the switch user command "su" to switch to another user account as follows

Now, to disconnect the session, first you need to log out from the accounts you have switched to, as follows

How to change the default SSH Port number?

To protect your server from anonymous attacks, changing the default port number to any unused port number helps. All users with Linux servers can change the SSH port number in the SSH configuration file (the default port number for SSH is 22).

The configuration file for SSH is /etc/ssh/sshd_config

All you need to do is edit this sshd_config file. Open the file with your preferred editor; before that, it is always good to take a backup of the original file before you make any changes to it.

#cp  /etc/ssh/sshd_config    /etc/sshd_config.original

Open the file with the vi editor

#vi   /etc/ssh/sshd_config

# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options change a
# default value.

#Port 22 -->default port number used for SSH; change this to your preferred port number
#AddressFamily any
#ListenAddress ::

# Disable legacy (protocol version 1) support in the server for new
# installations. In future the default will change to require explicit
# activation of protocol 1
Protocol 2

# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
"/etc/ssh/sshd_config" 137L, 3848C

In the above file, the line #Port 22 begins with #, which tells the server to ignore everything after it on the same line. Remove that character and put in your preferred new port number.

Note: Make sure you are not using a port number which is already in use. If you are unsure, check this list of TCP/IP and UDP port numbers and their uses

Try to use a port number which is not listed in the above link; here I use port number 2222.
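That single-line edit can also be scripted. Below is a minimal sketch that performs the same change on a throwaway sample file; the path /tmp/sshd_config.sample is purely illustrative, so never script-edit the real /etc/ssh/sshd_config without a backup:

```shell
# Work on a throwaway sample file, not the real /etc/ssh/sshd_config
cat > /tmp/sshd_config.sample <<'EOF'
#Port 22
#AddressFamily any
Protocol 2
EOF

# Uncomment the Port line and switch it to 2222 in one step
sed -i 's/^#Port 22/Port 2222/' /tmp/sshd_config.sample

grep '^Port' /tmp/sshd_config.sample
```

The same sed expression works on the real file once you have a verified backup copy.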

Note: You will also need to change the SSH port number to the new one in the firewall.

After making any changes in the default configuration file, you will need to restart the respective service for the changes to take effect, so restart the SSH service as follows

#service sshd restart

From now onwards, SSH will listen on the port number you have specified.

Understanding SCP Protocol in Unix/Linux operating system:

Scp stands for Secure Copy. It is used to send files from the local server to a remote server (uploading) and to copy files from a remote server to the local server (downloading), securely: all the data transferred through the network is encrypted.

SCP is installed by default on all Linux distributions as a part of OpenSSH package

Note: SSH is used to connect to the remote server with a text-based interface

SCP is used to transfer files between different servers

Scp uses the SSH port number 22 to establish connectivity between the servers

Ex:1 How to transfer  a file from Local server to remote server

For this example, the syntax would look like this


Server1: (local server)

Server2: (remote server)

Now I am going to transfer a file from the local server to the remote server

#scp  /documents  root@<remote server IP>:/tmp

Note: /documents is the local server file to be transferred

/tmp is the remote server destination directory path

Once the authentication is successful, the file transfers to the remote server destination path; you will see the percentage reach 100, which indicates the entire data has been successfully transferred to the remote server.

Now, to verify, go to the remote server's /tmp directory and check whether the file /documents was successfully saved.

#cd  /tmp

#ls -t

Note: the -t option lists the most recently modified or created files first
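As a quick self-contained illustration of that -t ordering (a scratch directory and made-up file names, purely for demonstration):

```shell
# Create two files with distinct modification times in a scratch directory
mkdir -p /tmp/lst-demo
cd /tmp/lst-demo
touch older.txt
sleep 1              # ensure a different timestamp
touch newer.txt

ls -t                # newest first: newer.txt, then older.txt
```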

The above screenshot has confirmed the file has been successfully saved under /tmp directory of the remote server.

Ex:2 How to transfer a directory and all its contents from local server to the remote server?

To copy an entire directory we need to use the option -r (recursive) with the scp command, which selects the entire directory contents.


#scp  -v  -r  <Local server dir>   <user@remote server ip>:<remote server destination dir path>

You can also use the -v (verbose) option to view detailed output on your screen.

From Server1, I am going to transfer the /mydatabase directory to the remote server path /myfolder

As you can see from the above output, the mydatabase directory has some files and subdirectories.

#scp   /mydatabase  root@<remote server IP>:/myfolder

Note: If you forget to mention the -r option while transferring an entire directory, you will get the error message "not a regular file"; check the above screenshot.

Always use  -r while transferring the entire directory to the remote server.

After giving the correct password scp will transfer the /mydatabase directory to the remote server directory /myfolder

Now go to the remote server /myfolder path and confirm whether the directory /mydatabase is successfully transferred.

#cd  /myfolder




Note: To copy the files from remote server to the local server path you would use the same syntax in reverse as follows
#scp  username@remoteserverip:<remote serverfile>   <local server path>

I hope now you have understood the SSH and SCP protocols and their uses in a production environment


Managing User Account in Linux


Managing user accounts is an important daily task for system administrators. In this article I will explain how to administer user accounts, and we will also see the configuration files needed for maintaining them. All the users on the system are identified by a username and a user ID (UID) number. Humans recognize a user by the username, but the operating system uses the UID number to identify users on your system. When you create a user account, a UID is generated with the account by default, and every user has a unique UID number.

Special Users

While installing the operating system, some default user accounts are created on your system; these are normally called the default system accounts. These special users have different UID numbers.


Every user on your system is also a member of one or more groups. Instead of setting up individual permissions for each and every user, adding users to a group and then assigning permissions to the group is the easiest way of setting permissions for different users. Like UIDs, groups have a GID (group identification number).

System default configuration files that store the user account information

When you create a user or group, all the default information is updated in certain configuration files. There are three important configuration files that store all the user and group information. As you know, all the configuration files live under the /etc directory; inside it we have the passwd, shadow, and group files.

1./etc/passwd file

This /etc/passwd file stores the essential user information required during login. There are a total of seven fields in this file; by default, each entry in the passwd file follows the format below.
a)Username
b)Password
c)User ID(UID)
d)Group ID(GID)
e)Comment
f)Default Home Directory
g)Login Shell
Each field is separated by a colon(:)
Let me explain the fields one by one
Username: The name the user logs in to the server with. The allowed length for a username is between 1 and 32 characters.
Password: An "x" character indicates that the encrypted password is stored in the /etc/shadow file.
User ID(UID): The UID number for the root user is "0". UIDs 1-499 are reserved for the default system accounts; UIDs above 500 are used for the regular user accounts which we create manually with the useradd command.
Group ID(GID): It shows the Group ID that is stored in /etc/group file.
Home Directory: The default home directory for non-root user logins. If this directory does not exist, the user's directory becomes / only; login problems might occur if /home is not available at login.
Login Shell: This indicates The default shell to be used when the user login to the system.
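To see those seven colon-separated fields in action, you can split a passwd-style entry with awk. The entry below is a made-up example for illustration, not from a real system:

```shell
# A sample /etc/passwd entry (illustrative only)
entry='nirmal:x:1500:10:Site Admin:/home/nirmal:/bin/bash'

# Split on ':' and label the fields we care about
echo "$entry" | awk -F: '{print "user="$1, "uid="$3, "gid="$4, "home="$6, "shell="$7}'
# user=nirmal uid=1500 gid=10 home=/home/nirmal shell=/bin/bash
```

Running the same awk program directly against /etc/passwd prints one such line per account.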
Let me show you the screenshot of /etc/passwd file how the  fields are separated,

Check the  file permission for /etc/passwd

#ls  -l /etc/passwd
As this file contains sensitive user information, the permission for other users is set to read-only so that they cannot modify this file.


2./etc/shadow file

This file holds the users' encrypted password information. Once you have created a password, it is encrypted and stored inside this file along with your login name. Only the root user can read this file; other users cannot. Let us have a look at this file
#cat  /etc/shadow

1.Username: This is your login name
2.Password: Your encrypted password information, The $id is the algorithm used on GNU/Linux as follows
a.$1$ is the MD5 algorithm
b.$2a$ is the Blowfish algorithm
c.$5$ is the SHA-256 algorithm
3.Last password change: The date of the last password change, expressed in days since January 1, 1970.
4.Minimum: The minimum number of days that must pass before the user is allowed to change the password.
5.Maximum: The number of days the password remains valid.
6.Warning: The number of days before password expiry that the user is warned to change the password.
Note: The last two fields, separated by colons, are described below
7.Inactive: The number of days after the password expires before the account is disabled.
8.Expire: The date when the account is disabled, expressed in days since January 1, 1970.
Note: A password field which starts with an exclamation mark (!) means that the password is locked; if it starts without !, the account is unlocked.
Let me show you this with one example…
When the account is in locked state

From the above output, you can see the encrypted password starts with the ! mark, which indicates the account is in the locked state
After the account is unlocked

From the above output, the encrypted password starts without the ! mark because the account has been unlocked.
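A script can apply that same ! rule. Here is a minimal sketch against a sample password field (the hash below is fake, for illustration only):

```shell
# Second field of a sample shadow entry; the hash is invented
pwfield='!$6$somesalt$somehash'

# A leading '!' marks the password as locked
case "$pwfield" in
  '!'*) echo "locked" ;;
  *)    echo "unlocked" ;;
esac
# prints: locked
```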

3./etc/group file

It holds the user group information, such as which user belongs to which group. As in the files above, all the entries are separated by colons (:)

1.Group name: It indicates the group name
2.Password: By default a password is not used, hence this field is empty. If a password is set for the group, the encrypted password is stored here. If you need a group with privileged access, create a password for the group.
3.GroupID(GID): All users must be assigned a group ID when you check the /etc/passwd file you will find the group associated with each account.
4.Group List: It holds the usernames who all are members of the group, all the names are separated by commas.

To check the group information

#cat /etc/group


#less /etc/group


#more /etc/group

To find out the groups a user has been added to

#groups  <user name>
#groups  vasanth

Here the user vasanth belongs to the system groups ntp and adm.
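The same answer can be pulled straight out of a group file with awk. Here it runs against a tiny sample file with made-up entries rather than the real /etc/group:

```shell
# A sample group file (illustrative entries only)
cat > /tmp/group.sample <<'EOF'
adm:x:4:vasanth
ntp:x:38:vasanth
wheel:x:10:nirmal
EOF

# Print every group whose member list (field 4) contains 'vasanth'
awk -F: '$4 ~ /(^|,)vasanth(,|$)/ {print $1}' /tmp/group.sample
# prints: adm and ntp, one per line
```

Pointing the same awk program at /etc/group reproduces what `groups vasanth` reports (minus the user's primary group, which lives in /etc/passwd).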

How to create a user account?

Creating a user on a Linux box is very easy; however, this operation can be performed by the root user only. You can add a user to a Linux box in two ways.
1)By editing the /etc/passwd file(i.e,Manually adding all the fields like UID,GID,LOGINNAME,COMMENT,SHELL)
2)By using the “useradd” command which creates the account automatically as long as you give the correct details

Syntax: To create a user account by using the “useradd” command

#useradd    -u <uid>    -g <gid>    -d <home_directory>  -s <login_shell>   -c <comment>    <login_name>


u -----> To define the user ID (UID)
g -----> To define the group ID (GID)
d -----> To define the user home directory
s -----> To define the user login shell
c -----> To leave a comment for the user account
Now let us add a user account by using this syntax
#useradd -u 1500  -g  10  -d  /home/nirmal  -s  /bin/bash  -c "Site Admin"  nirmal

After adding the account successfully, all the information will get automatically updated in the /etc/passwd file.
#cat /etc/passwd

From the above output, all the fields successfully updated in /etc/passwd file.

Now if you want to confirm to which group the user “hema” was added run the following command,

#id  <user name>

#id  hema

The group name for GID 10 is "wheel". If you have your own group you can also mention it with the useradd command; in this example I have used the default system group ID 10 (wheel).
Note: Sometimes  the useradd command might fail under the following conditions

1.The UID you specify is already taken

2.The GID you mention does not exist

3.The comment contains special characters such as (!) and (/)

4.The shell you specify does not exist.


#useradd   <user name>
In this method, the system uses the defaults to create the user account and update the same in /etc/passwd file,
#useradd  jeya

Now check the account details in /etc/passwd file

#cat /etc/passwd  |grep jeya

Note: The root UID and GID is always 0, and default group for root is always 0.
Note: Check the second field, which appears with the "x" character: this means it is a password field ("x" appears because we use a process called password shadowing). I will explain password shadowing in an upcoming post.
Note: In /etc/shadow if you see exclamation (!!)  in the password field it indicates no password assigned to the user.


Since the user vasanth has a password, you will see the encrypted password line. Now check the other users, hema and jeya: you can see the !! symbol, which says both users don't have a password.
As I said, useradd <username> takes the defaults to create the user account. If you would like to know what default values are assigned when creating a user account with the useradd command, here you go.
In Linux, there are two configuration files that hold the default values assigned to a user by the useradd command.
1)/etc/default/useradd file

#cat /etc/default/useradd
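On a typical RHEL/CentOS system this file looks roughly like the fragment below. This is a hedged sample only; the exact values vary by distribution, so check your own file:

```
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
```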

you can also use the following command to fetch the same details
#useradd -D


2)/etc/login.defs file

This file contains values such as UID and GID ranges, password expiry information, the password encryption method, and much more
#cat /etc/login.defs

You can also change the default values used by the useradd command. Let me show you a couple of examples of how to change the default values of the useradd command.

Change the default values of useradd command?

You can change the default values of the useradd command in two ways:
1.By editing the /etc/default/useradd file manually
2.With the useradd command, using some options

Now I am going to change the default home directory for all new users

#useradd -D

From the above output, all users will use /home as their default home directory. Now let us change this default home directory,
#useradd -D -b /var/users

Now check whether it is updated in the configuration file
#useradd -D


#useradd -D |grep HOME

The above output shows,  from now onwards all the new users will use /var/users as their default home directory

Change the default Login Shell

By default all users use /bin/bash as their login shell; now I am going to change it from bash to the Bourne shell, i.e., sh
#useradd -D -s /bin/sh

#useradd -D

From the output we can see that from now onwards all new users will use sh as their default login shell.
Once you have created a user account, the next step is to set a password for it. We have a command, passwd, with which we can set the password for the account.

Ex:1 To set a password for an account

#passwd  <user name>

#cat /etc/shadow  |grep hema

From the above screenshot, you will not see an encrypted line in the password field, as the user does not yet have a password; the (!!) indicates the account has not yet been given a password (i.e., no password)
#passwd hema

New password:******

After creating the password, it gets updated in an encrypted format in the /etc/shadow file
#cat /etc/shadow |grep hema

As you can see from the output, before you create a password for the account, nothing shows in the password field of /etc/shadow; you see only !! (which indicates no password, NP). After assigning the password, you can see the encrypted line in the password field.

Note: Even for a locked account it shows the same !! mark

Ex:2 To check the details or status of an account password

With the passwd command you have to use the option -S to fetch the status of the account password.


#passwd  -S  <username>

S --> To fetch the status of the user password

#passwd -S  hema

The result gives you seven fields, each with a different status
1.The first field is USER LOGIN NAME
2.The second field says whether the account is in locked state(LK) or no password(NP)
3.The third field shows the date of the last password change
4.The Fourth field shows the Minimum age for the password
5.The fifth field shows the maximum age for the password
6.The sixth field shows the warning period for the password
7.The seventh field shows the inactivity period for the password.
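To pick those seven fields apart programmatically, you can split a sample status line with positional parameters. The line below is invented for illustration; real `passwd -S` wording varies slightly between distributions:

```shell
# A sample `passwd -S` line (invented for illustration)
status='hema PS 2018-01-15 0 99999 7 -1'

# set -- assigns the whitespace-separated fields to $1..$7
set -- $status
echo "login=$1 state=$2 last_change=$3 min=$4 max=$5 warn=$6 inactive=$7"
# login=hema state=PS last_change=2018-01-15 min=0 max=99999 warn=7 inactive=-1
```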

Ex:3 To Lock a specified account


#passwd   -l   <username>

l -->indicates to lock the account password

#cat /etc/shadow  |grep hema

Now lock the user account as below
#passwd -l  hema

Now check the shadow file for the changes,
#cat /etc/shadow  |grep hema

Ex:4  To Unlock the account

#passwd  -u  <username>
#passwd  -u hema

#cat /etc/shadow  |grep hema

From the output you can see that once the account has been brought back to the unlocked state, the !! mark before the $ sign is removed. As an admin you should know the meanings of !!, NP, and PS in the shadow file.
Let me show you one small example of how the status gets updated before and after the account is locked and unlocked

PS --> Account has a password and is in the active state
LK --> Account is locked

Ex:5 To set the minimum number of days before a password change

The user can't change or modify his/her password until the minimum allowed days have passed.
If I assign 6 days as the minimum password age for the user vasanth, then vasanth must use the current password for at least 6 days and is not allowed to change it within those 6 days.
#passwd  -n  <days>  <username>
#passwd  -n   6  vasanth

Now check the password status for the user Vasanth,
#passwd  -S vasanth

From the above output, the minimum number of days required before a password change is now 6 days

Ex:6  Set the Maximum number of days before the password change

This is nothing but telling the user how many days the current password can be used: within this allowed maximum number of days, the user must change his/her password; once the maximum days are over, the password expires and must be changed.
#passwd  -x <days> <username>
#passwd  -S hema

From the above screenshot, the maximum number of days allowed before a password change is 7 days for the user hema. Let me modify this by using the following command
#passwd  -x 10 hema

Now check the status
#passwd  -S hema

Ex:7 How to Set warning days before the password expires

If you set the warning days for a user to 12, then he/she will receive an alert message to change the password starting 12 days before the password expiry date.
#passwd  -w  <warning days>  <username>
#passwd -w 12  hema

Now check the status whether it is updated on the password management file

Ex:8 How to DELETE the password for a user account?

You can do this in two ways: one is by editing the /etc/shadow file (i.e., removing the encrypted line for the user), and the second, quite easy, way is to use the "passwd" command with the -d option to remove the password.
#passwd -d  <username>
Let me remove the password for the user hema. Remember, after removing the password, check the password status in the /etc/shadow file
#passwd  -S hema

Now delete the password by using the following command
#passwd -d hema

#passwd -S hema


#cat  /etc/shadow  |grep hema

From the above screenshot, you can see the password status has been updated in all the password management files.
In our next tutorial, I will explain how to control password aging by using the "chage" utility.



Importance of “lsof” command in Linux

lsof stands for List Of Open Files. It is a powerful command to analyze which files are opened by which process, and it really helps system administrators keep track of process usage. When you try to unmount a filesystem or device and it shows the device is busy, it means files on it are in use; with the help of the lsof command we can easily identify the files which are in use.

What do we get from the lsof output?

With lsof you can use options to get more detailed output about the files opened by processes. Below are the kinds of details you can get after executing the command

1.Processes in the system

2.Network services

3.Regular files

4.Network files (NFS, Internet sockets, Unix domain sockets)

Note: By default in Unix/Linux this command comes pre-installed. When you execute lsof and it shows the error "lsof: command not found", it could be that lsof is not in your PATH; check the /bin and /sbin directories for the command, and if it is not listed in these directories, you have to install it manually.
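If lsof is missing, /proc offers a quick fallback for a single process: every open file descriptor of a PID appears under /proc/<PID>/fd. A small sketch using the current shell's PID ($$):

```shell
# List the open file descriptors of the current shell via /proc
ls /proc/$$/fd

# Each entry is a symlink to the open file; fds 0, 1 and 2
# are stdin, stdout and stderr
readlink /proc/$$/fd/0
```

This only covers one process at a time, so it is a stopgap rather than a replacement for lsof.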

Now let us see some of the examples with the lsof command in detail,

Ex:1 To list all open files by all the process


Without any options, this lists all open files and processes.

From the above output, you can see the details of all open files. The FD column stands for file descriptor and shows values such as

cwd Current working directory

rtd Root directory

txt Program text code

mem Memory-mapped file

FD column entries like 10u are file descriptor numbers followed by one of the modes u, r, w

r means read access

w means write access

u means both read and write access.

TYPE –file types and  identity

DIR– Directory

REG– Regular file

CHR-Character special  file

FIFO-First In First Out

Ex:2 How to get the details of all processes which have opened a file?

#lsof   /hello.txt

In this example I have opened the file /hello.txt for live monitoring, so I use tail -f /hello.txt to keep the file in an open stream. Now check with lsof to see which process is using the file /hello.txt

As you can see from the above output the file /hello.txt is opened by the process “tail”

Ex:3 How to list all opened files by a user?

By adding the -u option to lsof you can get the files which are opened by a given user

#lsof  -u vasanth

From the above output, you can see the files opened by the user Vasanth (marked with square red box)

You can also specify multiple users by putting commas between the usernames (with no spaces)

#lsof -u anis,nirmal,marshall

Ex:4 To list all files opened by  a particular command

#lsof  -c  <command>

Let me put a file in an open stream by using tail -f /cts, and after that run lsof to view the files opened by the tail command

#lsof -c  tail

From the output, you can see the files opened by the tail command: the file /home/vasanth/data, then the file /cts under the root directory, and much more

To list all files opened by more than one  commands use the below syntax

#lsof -c firefox,top

Ex:5 To list files opened by a particular User and command?

Here you can also combine the options -u and -c together

#lsof  -u Vasanth  -c firefox

From the output, you can see the user opened files as well the files which all are opened by the command firefox.

Ex:6 How to list all open files by a process using the PID number

It's simple: just add the option -p to the lsof command to list the files opened by the process with that PID

First get the PID number of the running program by using top or ps command


Once you have the PID, use it with the lsof command. Here I use the PID 18

#lsof -p  <PID>

From the above output, the process has opened files from the paths / and /proc, and you can also see the user who is running that program (here root), the command name, what type of files the PID is using, and much more.

Ex:7  To list all network connection

#lsof  -i

here -i means Internet sockets (i.e., TCP and UDP sockets)

From the above screenshot, you can see the port status (listening or not listening), the protocol type, the node, and many more details.

If you want to get all the TCP open socket connection details

#lsof  -i tcp

Ex:8 How to get which process is using a port?

you can also use the netstat command  for this

#lsof  -i:22

you can also use the service name instead of the port number

#lsof -i:ssh

I hope you have understood the need of using the lsof command in Unix/Linux Operating system.





How to Create an Extended Partition in Linux


In our previous tutorial we learned about Linux disk management and the procedures to follow before creating a partition. In this tutorial, we are going to learn about the extended partition and how to create one.

We know that the MBR partition scheme allows us to create a maximum of 4 primary partitions only. In order to create more partitions we have to choose an extended partition; from the extended partition we can create more logical partitions (we can create a maximum of 15 logical partitions).

Step:1 First we will create three more primary partitions; before that, check the partition layout information by using the following command.

#fdisk  -l


From the above output it is confirmed that the disk /dev/sdb has only one partition (/dev/sdb1)

Step:2 Create the second partition:

#fdisk  /dev/sdb


After giving the partition number, give the first available sector value, or accept the default sector value by pressing Enter without entering anything. For the last cylinder value, give the size of the partition in KB, MB, or GB format and press Enter

Now to save this partition table press “w”

create two more partitions in the same way

To check the partition details run the following command

#fdisk  -l, or you can use the "p" option from the fdisk menu

The above output shows that the disk /dev/sdb now has three partitions

Step:3 Create the fourth partition, and after that try to create another primary partition

We have created four primary partitions, Now this will not allow us to create any more primary partitions on this disk since MBR partition scheme doesn’t allow more than four primary partitions.

If you need more partitions, you must delete one primary partition from the disk; then we can create one extended partition, which can be used to create more logical partitions.

When you try to create another primary partition, in the above output you will find an error message saying that you must delete a primary partition in order to create an extended partition on this disk.

Let me delete the fourth primary partition to make it available for creating the extended partition.

Type “d” option to delete the partition, after that give the partition number you want to delete

Press “w” to save the partition changes

Now let us create one extended partition so that we can create the logical partitions. Here, for the extended partition, we have to give the maximum disk space; I am going to assign 1G for this

Now check whether the newly created extended partition is updated

#fdisk   -l

From the above output we have created an extended partition /dev/sdb4 with a size of 1GB; now we can create logical partitions of up to 1GB in total. Let us create two logical partitions on this: the first partition with a size of 500MB, and the second partition with a size of 100MB.

From the output, the first logical partition /dev/sdb5 created successfully.

Do the same for the remaining partition

Now check the partition details by using the fdisk command

#fdisk  -l

From the above output, you can see the extended partition /dev/sdb4 with a size of 1GB; from the extended partition we have created two logical partitions, /dev/sdb5 and /dev/sdb6.

Step:4  Now we have to create a filesystem on these partitions.

#mkfs    -t   ext3   /dev/sdb5

#mkfs  -t ext3   /dev/sdb6

Note: You cannot create a filesystem on the extended partition (/dev/sdb4) because it cannot be used to hold data. Logical partitions are used to store data, so we have to format the logical partitions with a supported filesystem type.

Now, to make these newly created logical partitions visible to users, we have to mount them on mount-point directories.

Step:5 To mount a filesystem:


#mount  <filesystem>   <mountpoint directory>

Let me mount all the logical partitions to mount point directories

#mkdir  /facebook

#mkdir  /whatsapp

#mount /dev/sdb5   /facebook

Repeat the same for the remaining logical partitions

#mount  /dev/sdb6  /whatsapp

Step:6 Now to view the mounted filesystem details, run the command

#df  -h

I hope you have understood the concepts of partitions in Linux

If you miss my previous tutorial(Linux disk management) here is the link Linux Partition


How to copy a file/directory in Unix?

Copy Command in Unix(cp command)

How do I copy files and directories under UNIX-like operating systems?

You need to use the "cp" command to copy files and directories under UNIX-like operating systems

Syntax:To copy a file to a directory

#cp    <options>   <source >    <destination>


-r -recursive (use this option when you need to copy an entire directory and all its contents)

Note: I have mentioned some of the important options here; you can refer to the man page to learn more about the available options

Ex:1 Copy a file called "backup.txt" into another directory called /tmp.

First create a file with some contents

#echo  “This is my first line” >/backup.txt

Check the contents

#cat  /backup.txt

Sample output: This is my first line


Now copy this file to the /tmp directory

#cp  /backup.txt   /tmp

The file was successfully copied to the directory /tmp

To confirm, go to the /tmp directory and verify

#cd  /tmp


Sample output: backup.txt



Ex:2 To copy multiple files in to a directory


#cp   <source file>   <source file>   <destination directory>

Leave space between the files to copy multiple files to a directory

Let me create a couple of files with some contents:

#echo  “Buy lots of eggs” >/mydoc

#echo “Buy some chicken” >/mydata

#echo “Buy some  cake” >/myfile

We have successfully created three files with some contents.

Now copy all these files to a directory called /vasanth

#mkdir  /vasanth

#cp    -v   /mydoc   /mydata   /myfile     /vasanth

Here I have applied the verbose option (it's not mandatory)

Sample output: `/mydoc' -> `/vasanth/mydoc'  `/mydata' -> `/vasanth/mydata'  `/myfile' -> `/vasanth/myfile'

Go to the /vasanth directory and confirm

#cd   /vasanth


Sample output: mydata mydoc myfile

Ex:3 To copy all directory and all its content to another directory:

A directory and all its contents can be copied from source to destination with the recursive option -r

It allows directories including all of their contents to be copied:


#cp   <option>   <source>   <destination>

Let me create a directory and add some files inside

#mkdir   /testdir

#cd  /testdir

#touch  f1  f2  f3  f4

#mkdir  d1  d2  d3  d4

#cd  /

Now the directory /testdir has a couple of files and some subdirectories inside

#mkdir    /output

This is the destination directory

#cp    -rvf    /testdir     /output

Here I have applied the verbose (-v) and force (-f) options. Note that -f does not answer "yes" to overwrite prompts: it makes cp delete a destination file that cannot be opened and retry the copy. The per-file overwrite prompts usually come from root's shell aliasing cp to cp -i (difficult for system administrators if the directory contains some 100 or 500 files); in that case run \cp or /bin/cp to bypass the alias.

Go to the destination directory and confirm whether all the files and sub directories copied

#cd  /output

#ls   -l

Sample Output:  drwxr-xr-x  6 root  root  4096   dec 12  13:37  testdir
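Here is the recursive copy as a self-contained sketch (a temporary directory stands in for /testdir and /output):

```shell
workdir=$(mktemp -d)
cd "$workdir"

# Build a source tree with four files and four sub-directories
mkdir testdir
touch testdir/f1 testdir/f2 testdir/f3 testdir/f4
mkdir  testdir/d1 testdir/d2 testdir/d3 testdir/d4

# -r copies the directory itself plus everything under it
mkdir output
cp -r testdir output/

# The whole tree now exists under output/testdir
ls output/testdir
```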


Ex:4  How to select and copy only the files from a directory to another directory

Let us assume a directory contains both files and sub-directories and we want to copy only the files. Use the wildcard character “*” (asterisk): it matches everything in the directory, but because cp is run here without the -r option, only the regular files are copied and each sub-directory is skipped with a “cp: omitting directory” warning.

Create a directory with some files and sub directories inside

#mkdir   /nirmal

#cd  /nirmal

#touch   f1 f2  f3  f4  f5

#mkdir  d1  d2  d3  d4  d5

#cd   /

Now select all the files and copy

#cp  -v    /nirmal/*     /tmp
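A runnable sketch of the wildcard copy; note that without -r, cp warns about (and skips) each sub-directory and exits non-zero, which the `|| true` absorbs:

```shell
workdir=$(mktemp -d)
cd "$workdir"

# A source directory with both files and sub-directories
mkdir nirmal dest
touch nirmal/f1 nirmal/f2 nirmal/f3
mkdir nirmal/d1 nirmal/d2

# The * glob matches everything, but without -r cp copies only
# the regular files and prints "cp: omitting directory ..." to
# stderr for each sub-directory
cp nirmal/* dest/ 2>/dev/null || true

ls dest
```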

Ex:5  To avoid overwriting the existing file

Copy only when the destination file doesn’t exist

#cp  -n   <source>  <destination>

Let me create a directory with a file inside:

#mkdir  /test

#cd   /test

#touch result.txt


Now create a file in / directory with the same name

#touch  /result.txt

#echo “This is my first line”>/test/result.txt

#echo “This is my second line” > /result.txt

Read both files' contents:

#cat   /test/result.txt

#cat /result.txt

#cp  -n   /result.txt    /test/result.txt

#cat  /test/result.txt

Sample output: This is my first line

Here we can see that after running the copy, the original file result.txt in the /test directory is unchanged because of the -n option.
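The -n behaviour can be verified with this sketch; note that recent GNU coreutils releases make cp -n exit non-zero when it skips the copy, hence the `|| true`:

```shell
workdir=$(mktemp -d)
cd "$workdir"

mkdir test
echo "This is my first line"  > test/result.txt
echo "This is my second line" > result.txt

# -n (no-clobber): the destination already exists, so the copy
# is skipped (newer coreutils also return a non-zero status)
cp -n result.txt test/result.txt || true

# The destination keeps its original content
cat test/result.txt
```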

Ex:6  To confirm before overwriting


#cp   -i   <source>  <destination>

#cp  -i  /result.txt   /test/result.txt

Sample output: cp: overwrite '/test/result.txt'? y

#cat /test/result.txt

Sample output: This is my second line
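Because -i reads its answer from standard input, the prompt can be fed from a pipe, which makes the behaviour easy to demonstrate in a script:

```shell
workdir=$(mktemp -d)
cd "$workdir"

mkdir test
echo "old content" > test/result.txt
echo "new content" > result.txt

# -i asks "overwrite ...?" on stderr and reads the reply from
# stdin; piping "y" answers the prompt, so the file is replaced
echo y | cp -i result.txt test/result.txt 2>/dev/null

cat test/result.txt
```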

Ex:7  To make a backup of the destination file if the copied file has the same name:

#cat    /result.txt

Sample output: Hello there

#cat   /test/result.txt

Sample output: Testing line 

Use the --backup option to have the destination file backed up before its contents get overwritten with the new contents

#cp  --backup  /result.txt    /test/result.txt

Now go to the /test directory and check; you will see two files

#cd  /test

#ls

Sample output: result.txt   result.txt~

Now check the destination file content

#cd  /test

#cat  result.txt

Sample output: Hello there

#cat    result.txt~

Sample output: Testing line

As you can see, when result.txt is copied from the source to the /test directory, a backup of the original result.txt is made in the same directory as result.txt~, and the new content is written to result.txt as usual
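The --backup run above (a GNU cp option) can be reproduced with this sketch:

```shell
workdir=$(mktemp -d)
cd "$workdir"

mkdir test
echo "Testing line" > test/result.txt
echo "Hello there"  > result.txt

# --backup (GNU cp) renames the existing destination file to
# result.txt~ before writing the new content
cp --backup result.txt test/result.txt

cat test/result.txt    # the newly copied content
cat test/result.txt~   # the preserved original
```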


Ex:8  How to copy only when the source file is newer

In this example let me show you how to copy only the newer files from the source location to the destination directory; for this we have the -u (update) option.

Let me create some files

#touch  /a.txt  /b.txt   /c.txt  /d.txt   /e.txt

Now I have some files in the / directory, each 0 bytes in size

#ls  -lt


let me create a directory

#mkdir  /etc/vasanth

Inside the /etc directory I have created a sub-directory called vasanth

#cp   *.txt   /etc/vasanth

This will select and copy  all the files with the extension .txt

#cd  /etc/vasanth

#ls  -l

#touch /newdoc.txt

#ls  -l   *.txt


Now when we use the -u option combined with -v (to see what is being done), the cp command copies a file only when the source is newer than the copy in the destination directory, or when the destination copy is missing.

#cp  -uv  *.txt   /etc/vasanth

Sample output: 'newdoc.txt' -> '/etc/vasanth/newdoc.txt'

As a result, we see that only newdoc.txt is copied into the /etc/vasanth directory
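A self-contained sketch of the -u behaviour (a temporary directory stands in for / and /etc/vasanth):

```shell
workdir=$(mktemp -d)
cd "$workdir"

# First round: copy three .txt files into the destination
mkdir dest
touch a.txt b.txt c.txt
cp *.txt dest/

# Create one additional source file after the first copy
touch newdoc.txt

# -u copies a file only when the source is newer than the
# destination copy, or when the destination copy is missing;
# with -v, the only line printed is the copy of newdoc.txt
cp -uv *.txt dest/
```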
