How to migrate from t1.micro to t2.micro

Migrating from PV to HVM

t2 instances have been released.
t2 looks very attractive to me (in both specs and cost), so I tried migrating from t1.

Yesterday I worked through it and completed the migration successfully, so I'll report the migration steps.

T2 instance

AWS announced new low-end instances for Amazon EC2.

Introducing T2, the New Low-Cost, General Purpose Instance Type for Amazon EC2

t2 instances have a "burstable performance" feature,
but even in a simple spec comparison, they are more attractive than the corresponding t1-class instances.

T1 Instances

Model      vCPU  Memory(GB)  I/O       Price/Hour
t1.micro   1     0.613       Very Low  $0.026
m1.small   1     1.7         Low       $0.061
m1.medium  1     3.7         Moderate  $0.122

T2 Instances

Model      vCPU  Memory(GB)  I/O       Price/Hour
t2.micro   1     1.0         Moderate  $0.020
t2.small   1     2.0         Moderate  $0.040
t2.medium  2     4.0         Moderate  $0.080

*at the time of writing (2014/7/3)
*Tokyo region prices

Especially t2.micro offers:

  • ~60% more memory
  • improved I/O
  • a lower price
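To put the price difference in perspective, here is a quick back-of-the-envelope monthly cost at the listed Tokyo prices (my own illustration; a 720-hour month is an assumption for a round number):

```shell
# Approximate monthly cost of each micro instance at the hourly prices above.
awk 'BEGIN {
  printf "t1.micro: $%.2f/month\n", 0.026 * 720
  printf "t2.micro: $%.2f/month\n", 0.020 * 720
}'
```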



Normally, we launch a new EC2 instance from an AMI.
*We can create an AMI from a snapshot taken from an existing instance.

However, this only works when the virtualization type is the same.

Currently, we can't migrate from the "PV (paravirtual)" type to the "HVM (Hardware-assisted VM)" type in the AWS console.
*t1.micro is PV only.
*t2.micro is HVM only.

*I tried to migrate from the AWS console, but couldn't select a t2 instance type ("To enable this instance type, return to the previous step and select an AMI that supports HVM virtualization.").

But I don't want to rebuild.

I thought that if the boot area and the boot loader were set up properly, it should be possible to migrate from the PV type to the HVM type.

Investigate the migration steps

I explored concrete steps to realize the idea above.

Initially, I searched for the proper steps to copy the boot loader and boot area from t1 to t2,
and finally found the following forum thread.

Convert CentOS PV to HVM

Not only for t1 to t2 — I think this method enables migration from PV to HVM in general.

There is a note that it works on Amazon Linux / CentOS / Red Hat Linux.

I actually migrated on Amazon Linux.

The link above covers everything, but I've summarized the steps here as my own memorandum.


Overview of the migration is shown in the figure below.

migration steps


  • Work on a copy of the instance (via a snapshot); don't use the production instance.
    If the work fails, you could destroy the instance.
  • Use the same Availability Zone for the volumes and the instance.
    Mixing Availability Zones increases rework (from experience).

Install grub on PV

Install grub on the original t1.micro instance.

$ sudo yum install grub

Create snapshot of original

  1. Log in to the AWS Management Console and open "EC2".
  2. Click [ ELASTIC BLOCK STORE ] > [ Volumes ].
  3. Right-click the volume of the original instance, then click [ Create Snapshot ] in the context menu.
    Create Snapshot

    *Create a snapshot from the t1.micro volume.

  4. In the "Create Snapshot" window, enter an appropriate name and description, then click the [ Create ] button.

Create volume of original

Create a volume of the original from the snapshot of the original (created in the steps above).

  1. Click [ ELASTIC BLOCK STORE ] > [ Snapshots ] menu.
  2. Right-click the snapshot of the original instance (created above), then click [ Create Volume ] in the context menu.
  3. In the "Create Volume" window, enter appropriate values, then click the [ Create ] button.
    Specify the appropriate Availability Zone; the other values can be left at their defaults.

Create a New volume for HVM

Create a volume for t2.micro (the destination).

  1. Click [ ELASTIC BLOCK STORE ] > [ Volumes ] menu.
  2. Click [ Create Volume ] button.
  3. In "Create Volume" window, enter appropriate values, then click [ Create ] button.
    Create Volume

    *We can now choose the SSD (General Purpose) type. If you are concerned about SSD reliability, or are cost-sensitive, select Magnetic.

    *Specify a size larger than the original volume.

    *Use the same "Availability Zone" as the original volume.

Launch instance for working

Operations on the original/destination volumes will be done using a working instance.
Create and launch an EC2 instance for this work.

Attach the original / destination volumes

Attach the original and destination volumes to the working instance.
It's possible to attach a volume to a running instance.
If you attach a volume before boot, it might be mounted as the Linux system volume, so I think it's better to attach the volumes after boot.

  1. Click [ ELASTIC BLOCK STORE ] > [ Volumes ].
  2. Right-click original volume, then [ Attach Volume ] on context menu.
  3. In the "Attach Volume" window, specify the following values, then click the [ Attach ] button.
    Instance: working instance
    When you click the input box, the instances that can be attached are displayed.
    If the working instance does not appear, check the "Availability Zone".
    I specified the same values as the original article.
  4. Attach the destination volume to the working instance (same steps as above).
    Instance: working instance
    Same values as the original article.

Create destination partition on working instance.

I did the following steps as root user.

  1. Create the partition
    # parted /dev/xvdo --script 'mklabel msdos mkpart primary 1M -1s print quit'
  2. Notify the OS of the new partition
    # partprobe /dev/xvdo
  3. Wait for udev to settle
    # udevadm settle
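The three steps above can be wrapped in one small helper (my own sketch; /dev/xvdo is the destination disk from the attach step):

```shell
# make_boot_partition: label the destination disk with an MBR table,
# create one primary partition, and make the kernel/udev pick it up.
# This simply packages the three steps above; run it as root.
make_boot_partition() {
  dev="$1"    # e.g. /dev/xvdo (the destination volume)
  parted "$dev" --script 'mklabel msdos mkpart primary 1M -1s print quit'
  partprobe "$dev"    # re-read the partition table
  udevadm settle      # wait until /dev/xvdo1 appears
}
# Usage (as root): make_boot_partition /dev/xvdo
```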

Resize (shrink) the original filesystem


# e2fsck -f /dev/xvdm
# resize2fs -M /dev/xvdm

Output from the resize command:

Resizing the filesystem on /dev/xvdm to 1391485 (4k) blocks.
The filesystem on /dev/xvdm is now 1391485 blocks long.

The result of the resize is displayed. Note the block size and the number of blocks.
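If you prefer not to copy the number by hand, the block count can be scraped from the resize2fs message (a sketch of mine; the pattern matches the "... is now N blocks long." line shown above):

```shell
# extract_blocks: read resize2fs output on stdin, print the new block count.
extract_blocks() {
  awk '/blocks long/ {print $(NF-2)}'
}

# Example on the message shown above; the real invocation would be
#   resize2fs -M /dev/xvdm 2>&1 | extract_blocks
echo "The filesystem on /dev/xvdm is now 1391485 blocks long." | extract_blocks
```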

Replicate volume

Replicate from the original to the destination using the "dd" command.
*Use the block size and block count noted above.

# dd if=/dev/xvdm of=/dev/xvdo1 bs=4K count=1391485

Grow the destination filesystem back to the full partition size.

# resize2fs /dev/xvdo1

Prepare for grub installation

Prepare for the grub installation using chroot.

# mount /dev/xvdo1 /mnt
# cp -a /dev/xvdo /dev/xvdo1 /mnt/dev/
# rm -f /mnt/boot/grub/*stage*
# cp /mnt/usr/*/grub/*/*stage* /mnt/boot/grub/
# rm -f /mnt/boot/grub/device.map

Install grub

Install grub using chroot.

# cat <<EOF | chroot /mnt grub --batch
> device (hd0) /dev/xvdo
> root (hd0,0)
> setup (hd0)
> EOF
Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]
grub> device (hd0) /dev/xvdo
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  31 sectors are embedded.
 Running "install /boot/grub/stage1 (hd0) (hd0)1+31 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded

Delete temporary device nodes

Delete the temporary device nodes from the destination volume.

# rm -f /mnt/dev/xvdo /mnt/dev/xvdo1

Update the grub settings

# vi /mnt/boot/grub/menu.lst

Edit it as follows:

  • Change "root (hd0)" to "root (hd0,0)"
  • Change "console=*" to "console=ttyS0"
  • Add "xen_pv_hvm=enable" to the "kernel ..." line

The following is a sample

# created by imagebuilder

title Amazon Linux 2014.03 (3.10.42-52.145.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.10.42-52.145.amzn1.x86_64 root=LABEL=/ console=ttyS0 LANG=ja_JP.UTF-8 KEYTABLE=us xen_pv_hvm=enable
initrd /boot/initramfs-3.10.42-52.145.amzn1.x86_64.img
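The three edits can also be applied non-interactively. This is my own sed sketch, assuming GNU sed and the menu.lst layout shown above (it blindly appends the xen_pv_hvm flag, so don't run it twice):

```shell
# fix_menu_lst: apply the three menu.lst edits described above, in place.
fix_menu_lst() {
  sed -i \
    -e 's/^root (hd0)$/root (hd0,0)/' \
    -e 's/console=[^ ]*/console=ttyS0/' \
    -e '/^kernel /s/$/ xen_pv_hvm=enable/' \
    "$1"
}
# Real usage: fix_menu_lst /mnt/boot/grub/menu.lst
```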

Edit fstab

# vi /mnt/etc/fstab

The original article says fstab needs to be edited, but in my environment it didn't.

LABEL=/ / ext4 defaults,noatime 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0

Create a label on the destination volume, then unmount

# e2label /dev/xvdo1 /
# sync
# umount /mnt

Create snapshot from destination volume

"vol-xxxxxxxx" is the ID of the destination volume. Set an appropriate value for "description".

# aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "amazon-linux HVM snapshot"

*If the aws command returns an error, configure it as follows:

# aws configure
Default region name [None]: ap-northeast-1
Default output format [None]: json

Create HVM AMI from snapshot

The original article describes the following command, but it didn't work in my environment, so I did this step in the AWS Management Console instead.

# aws ec2 register-image --name "amazon-linux HVM" --description "amazon-linux HVM" --architecture x86_64 --root-device-name "/dev/sda1" --virtualization-type hvm --block-device-mappings ''


  1. Click [ELASTIC BLOCK STORE] > [Snapshots].
  2. Right-click the snapshot, and click [Create Image] in the context menu.
  3. In the "Create Image from EBS Snapshot" window, specify appropriate values and click the [Create] button.
    Create Image from EBS Snapshot

    *Specify HVM (Hardware-assisted virtualization) as the Virtualization type.

    *I specified /dev/sda1 as the Root device name.

The AMI (which supports HVM) is now complete.

Launch T2 instance

Once the AMI has been created, you can launch an instance from it.

Move to the [IMAGES] > [AMIs] menu, then right-click the AMI and click [ Launch ].

Then follow the wizard.
If the instance starts up successfully, you're done.


It goes without saying, but please make sure there are no problems with the operation and settings of the instance.

In my environment, part of the system settings differed from before the migration, so I re-configured them following "Initial settings of EC2 Instance (Amazon Linux)".

In my case there was no big problem, so I also switched the website over after testing for several hours. But I think you should verify carefully.

If you wonder "Is this instance really running as t2.micro?", run the "free" command as follows.

You can confirm that about 1 GB of memory is allocated:

$ free
             total       used       free     shared    buffers     cached
Mem:       1020536     927580      92956          0       7448     190036
-/+ buffers/cache:     730096     290440
Swap:      1310716      13664    1297052
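To script the check, the total can also be pulled from /proc/meminfo (my own sketch; a t2.micro should report roughly 1,020,000 kB, matching the free output above):

```shell
# mem_total_kb: print MemTotal in kB. Reads /proc/meminfo by default,
# or any file given as the first argument (useful for testing).
mem_total_kb() {
  awk '/^MemTotal:/ {print $2}' "${1:-/proc/meminfo}"
}
# Real usage on the instance: mem_total_kb
```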


With the advent of t2, I think the EC2 instance lineup has become more attractive.

I also ran a simple performance test and verified that t2.micro performs better than t1.micro. > t1.micro, t2.micro performance comparison

Considering performance and price, I think it's better to use t2 when launching a new low-cost instance.

Personally, even if you are already using a t1 instance, I recommend migrating to t2.

The migration is not trivial, but it can be done with the steps above. (Depending on your environment, I think this is easier than rebuilding.)

I think it is worth trying.