I have an Amazon EC2 instance of type t2.micro running with 16GB of disk space. Recently I needed more space for some work for which 16GB was not enough, so I decided to increase the disk space from 16GB to 64GB.
Set New EC2 Volume Size from Dashboard
It is a two-step process. First, I needed to increase the volume size for the EC2 instance from the EC2 dashboard. If we click the
Volumes link in the left menu of the EC2 dashboard, we will see all the volumes being used by our EC2 instance.
We can select the volume we want to expand using the checkbox on its left and click Actions → Modify Volume.
This will open up a window to select the new volume size and other settings.
We can set any size we wish as the new size of the EC2 volume. I wanted the new volume to be 64GB. After we click the
Modify button, we will see a success message saying that our request to modify the volume size has been submitted. It may take 3 to 5 minutes to take effect. The volume will show up as yellow
in-use in the State column of the volume dashboard. When it turns green, the volume of our EC2 drive has been successfully expanded.
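As a side note, the same dashboard step can be performed from the AWS CLI with the `modify-volume` command; the volume ID below is a placeholder. This sketch only prints the calls so they can be reviewed before running them for real (which requires valid AWS credentials):

```shell
# Placeholder volume ID -- find yours in the Volumes list on the dashboard
# or via `aws ec2 describe-volumes`.
VOLUME_ID="vol-0abc1234example"
NEW_SIZE_GB=64

# Print the CLI calls instead of executing them, so they can be reviewed
# first; drop the `echo` to run them for real.
echo "aws ec2 modify-volume --volume-id $VOLUME_ID --size $NEW_SIZE_GB"
echo "aws ec2 describe-volumes-modifications --volume-ids $VOLUME_ID"
```

The second call reports the modification's progress, which is the CLI equivalent of watching the State column turn from yellow to green.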
Now we are done with the dashboard part. To complete the expansion, we have to follow the second step from within the EC2 instance, as it has to be done from the OS. I am using Ubuntu on my EC2 instance, so I log in to it using ssh.
Expand Disk Volume
Let's have a look at the available disk space in the EC2 instance, just to get an overview of how it looks. We will use the
df -H command for this purpose:
$ df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            485M     0  485M   0% /dev
tmpfs           104M  754k  103M   1% /run
/dev/xvda1       17G   17G    0G 100% /
tmpfs           516M     0  516M   0% /dev/shm
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs           516M     0  516M   0% /sys/fs/cgroup
tmpfs           104M     0  104M   0% /run/user/1000
So, I had a 16GB volume and 100% of it was used. Looks like a good time to expand the volume 🙂
Next, we will run the
lsblk command to see the available block devices in our EC2 instance:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /
The above output shows that we have only one disk, named
xvda, with a single partition in it called
xvda1. As we can see, the expansion we performed from the EC2 dashboard has not yet been applied inside the instance. That is because we are not done with the expansion yet. Next, we will expand the partition to fill the available size of the disk using the growpart command (note that the device and the partition number are passed as separate arguments):
$ sudo growpart /dev/xvda 1
We are almost done. Now we will expand the file system to use all the available space in the partition. We will use the
resize2fs command for this:
$ sudo resize2fs /dev/xvda1
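One caution before running it: resize2fs only understands ext2/ext3/ext4 filesystems. If your AMI's root volume is formatted as XFS (the default on Amazon Linux, for example), the grow command is xfs_growfs instead. A minimal sketch of that decision, with a helper name of our own invention:

```shell
# pick_grow_cmd maps a filesystem type (as reported by `lsblk -no FSTYPE`)
# to the matching grow command. The helper name is ours, not a standard tool.
pick_grow_cmd() {
  case "$1" in
    ext2|ext3|ext4) echo "sudo resize2fs /dev/xvda1" ;;
    xfs)            echo "sudo xfs_growfs -d /" ;;
    *)              echo "unsupported filesystem: $1" >&2; return 1 ;;
  esac
}

# Look up the root filesystem type and print the command to run.
# (Falls back to ext4 if /dev/xvda1 does not exist on this machine.)
FS_TYPE=$(lsblk -no FSTYPE /dev/xvda1 2>/dev/null || true)
pick_grow_cmd "${FS_TYPE:-ext4}"
```

On my Ubuntu instance the root filesystem is ext4, so resize2fs is the right tool here.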
And, we are done!
Now, let's check our disk partition and usage details to make sure our changes were applied:
$ df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            485M     0  485M   0% /dev
tmpfs           104M  754k  103M   1% /run
/dev/xvda1       68G   17G   49G  25% /
tmpfs           516M     0  516M   0% /dev/shm
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs           516M     0  516M   0% /sys/fs/cgroup
tmpfs           104M     0  104M   0% /run/user/1000
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  64G  0 disk
└─xvda1 202:1    0  64G  0 part /
Everything worked properly and looks fine.
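If you resize volumes regularly, this final check can be scripted instead of eyeballed. A small sketch that reads the root filesystem's size from df (using the portable -P flag so long device names never wrap onto a second line):

```shell
# Read total and available kilobytes for the root filesystem from `df -Pk`.
# NR==2 selects the data row below df's header line.
TOTAL_KB=$(df -Pk / | awk 'NR==2 {print $2}')
AVAIL_KB=$(df -Pk / | awk 'NR==2 {print $4}')
echo "root filesystem: ${TOTAL_KB} KB total, ${AVAIL_KB} KB available"
```

Comparing TOTAL_KB before and after the resize is a quick way to confirm the new space actually landed on the filesystem.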