Integrating LVM with Hadoop and providing Elasticity to DataNode Storage.

Zoro
Nov 4, 2020

Suppose you run short of storage in your Hadoop cluster and want to increase your DataNode storage without disturbing the data already stored on the DataNodes. What would you do? If you already know the answer, that’s great, but let me show you some simple steps by which you can very easily achieve elasticity in your Hadoop cluster.

From the title itself you might already have guessed the trick. Yes, in order to achieve elasticity in Hadoop we need to integrate LVM (Logical Volume Manager) with it.

Diagram: building an LVM logical volume from physical volumes

First, we create a logical volume with LVM and mount it on the DataNode directory configured in Hadoop’s “hdfs-site.xml” file.
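For reference, the DataNode directory is set by the dfs.datanode.data.dir property in hdfs-site.xml. The path /dn below is only a placeholder; use whatever directory you mount the logical volume on:

```xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- placeholder path: the directory the logical volume is mounted on -->
    <value>/dn</value>
  </property>
</configuration>
```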

Steps to create a logical volume and mount it :-

Step -1

~ Create physical volumes from the raw disks.

Cmd:-

~ pvcreate /dev/(disk1 name)

~ pvcreate /dev/(disk2 name)

Step -2

~ Combine the physical volumes into a volume group.

Cmd:- vgcreate (name of vg) /dev/(disk1 name) /dev/(disk2 name)

Step -3

~ Create a logical volume of the desired size from the volume group.

Cmd:- lvcreate --size (size in GB) --name (name of lv) (name of vg)

Step -4

~ Format the logical volume with the ext4 filesystem.

Cmd:- mkfs.ext4 /dev/(vg name)/(lv name)

Step -5

~ Mount the logical volume on the DataNode directory configured in the hdfs-site.xml file.

Cmd:- mount /dev/(vg name)/(lv name) /(hdfs file folder name)
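Steps 1–5 can be sketched as one script. The disk names (/dev/sdb, /dev/sdc), the volume group (dnvg), the logical volume (dnlv), its size (10G), and the mount point (/dn) are all placeholder assumptions; substitute your own. The script only prints its plan unless you run it with DRY_RUN=0 as root:

```shell
#!/bin/sh
# Sketch of steps 1-5: create PVs, a VG, an LV, format it, and mount it.
# All device and mount names below are placeholders.
# DRY_RUN=1 (default) only prints each command instead of executing it.
set -e
DRY_RUN=${DRY_RUN:-1}
PLAN=""
run() {
    PLAN="$PLAN+ $*
"
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run pvcreate /dev/sdb                     # step 1: mark both disks as physical volumes
run pvcreate /dev/sdc
run vgcreate dnvg /dev/sdb /dev/sdc       # step 2: pool them into one volume group
run lvcreate --size 10G --name dnlv dnvg  # step 3: carve out a logical volume
run mkfs.ext4 /dev/dnvg/dnlv              # step 4: format it as ext4
run mkdir -p /dn                          # create the DataNode directory if missing
run mount /dev/dnvg/dnlv /dn              # step 5: mount it on the hdfs-site.xml path
```

Running it with DRY_RUN=1 first lets you review the exact commands before touching any disks.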

Step -6

~ To extend the logical volume size:

Cmd:- lvextend --size +(amount to add) /dev/(vg name)/(lv name)

~ Then grow the ext4 filesystem to use the new space. This works while the volume is mounted, so the DataNode keeps running.

Cmd:- resize2fs /dev/(vg name)/(lv name)
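Step 6 as a sketch: lvextend grows the logical volume, and resize2fs then grows the ext4 filesystem online so the DataNode sees the new capacity. The names dnvg/dnlv and the +5G amount are placeholder assumptions, and the script is dry-run by default:

```shell
#!/bin/sh
# Sketch of step 6: grow the LV, then resize the ext4 filesystem to match.
# dnvg/dnlv and +5G are placeholders. DRY_RUN=1 (default) only prints the plan.
set -e
DRY_RUN=${DRY_RUN:-1}
PLAN=""
run() {
    PLAN="$PLAN+ $*
"
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run lvextend --size +5G /dev/dnvg/dnlv   # grow the logical volume by 5 GiB
run resize2fs /dev/dnvg/dnlv             # grow the ext4 filesystem online
```

Note the leading + in +5G, which adds to the current size; without it, lvextend treats the value as the new absolute size.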

Finally, after following these few simple steps, we can achieve elasticity in our Hadoop cluster.

*Thanks to my amazing team members :-

~ Ashutosh Kodgire

~ Akshit Dharmendrakumar Modi

~ Janhavi Jain

~ Gursimar Singh

  • Special thanks to Vimal Sir. You are an inspiration! ✨
