Setting up ephemeral disks as JBOD on EC2

I recently had to set up the instance store volumes on some EC2 instances in a JBOD configuration, and I wanted to note how I did it for future reference. In this example I was using m3.2xlarge instances, each with two 80GB SSD instance stores attached at /dev/xvdb and /dev/xvdc. I’m using LVM to do all of this, and the instances are running RHEL 6.5.
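
A quick way to sanity-check where the instance stores landed (device names can vary by AMI and instance type) is to ask the instance metadata service and the kernel:

# List the block device mapping EC2 reports for this instance
$ curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

# Confirm the kernel actually sees both disks
$ cat /proc/partitions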

The first ephemeral disk is usually mounted on launch, so unmount it

$ umount /dev/xvdb
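
On many AMIs that launch-time mount comes from an entry cloud-init wrote to /etc/fstab, so it’s worth commenting that out as well or the device can get remounted on reboot. Something like this works, but check what your AMI actually wrote before running it:

# Comment out any fstab line referencing the first ephemeral disk
$ sed -i.bak '/xvdb/ s/^/#/' /etc/fstab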

Create physical volumes with pvcreate

$ pvcreate /dev/xvdb
  Physical volume "/dev/xvdb" successfully created
$ pvcreate /dev/xvdc
  Physical volume "/dev/xvdc" successfully created
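
You can confirm LVM registered both disks with pvs:

# Both PVs should show up, unallocated, at roughly 80GB each
$ pvs /dev/xvdb /dev/xvdc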

Create a volume group with vgcreate

$ vgcreate vg0 /dev/xvd[bc]
  Volume group "vg0" successfully created
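
vgs will confirm the group spans both physical volumes and shows the combined capacity:

# Expect #PV = 2 and VSize around 160GB
$ vgs vg0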

Create a logical volume that uses all the available space in the volume group with lvcreate

$ lvcreate -l 100%FREE -n lvol1 vg0
  Logical volume "lvol1" created
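
By default lvcreate allocates the space linearly, one disk after the other, which is what makes this JBOD rather than RAID-0. If you wanted striping across the two disks instead, the same command takes a stripe count:

# Alternative, not what I did here: stripe across both PVs, RAID-0 style
$ lvcreate -l 100%FREE -i 2 -n lvol1 vg0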

Make a filesystem on this logical volume (I chose ext3)

$ mkfs.ext3 /dev/vg0/lvol1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
9830400 inodes, 39315456 blocks
1965772 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1200 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
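
Since anything on instance stores is disposable, the periodic fsck that mkfs mentions at the end there is of little value; the tune2fs command it points to can turn it off:

# Disable the mount-count and time-based filesystem checks
$ tune2fs -c 0 -i 0 /dev/vg0/lvol1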

Mount the logical volume and check the available disk space

$ mount /dev/vg0/lvol1 /mnt
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            9.9G  2.4G  7.1G  25% /
tmpfs                  15G     0   15G   0% /dev/shm
/dev/mapper/vg0-lvol1
                      148G  188M  140G   1% /mnt
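
If you want the volume remounted after a reboot, an fstab entry works; I’d include nofail so boot doesn’t hang if the LV is missing. Keep in mind that instance store contents don’t survive a stop/start, so after one of those you’d be redoing this whole setup anyway:

# Remount on reboot; nofail keeps boot from blocking if the LV is gone
$ echo '/dev/vg0/lvol1 /mnt ext3 defaults,nofail 0 0' >> /etc/fstab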

Party.