Bug #3760
Not possible to save a persistent image or take a snapshot when installing an image from scratch through ON
Status: | Closed | Start date: | 04/20/2015
---|---|---|---
Priority: | Normal | Due date: |
Assignee: | Jaime Melis | % Done: | 0%
Category: | Core & System | |
Target version: | Release 5.0 | |
Resolution: | fixed | Pull request: |
Affected Versions: | OpenNebula 4.12 | |
Description
Hello,
According to http://docs.opennebula.org/4.12/user/virtual_machine_setup/add_content.html, the template created to boot from the CD-ROM and install the OS onto the newly created image must set the OS/BOOT parameter to cdrom. If I do that, it creates the following deployment file for KVM:
{{{
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-150</name>
<cputune>
<shares>2048</shares>
</cputune>
<memory>2097152</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='cdrom'/>
</os>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/103/150/disk.0'/>
<target dev='vda'/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.1'/>
<target dev='hda'/>
<readonly/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.2'/>
<target dev='hdb'/>
<readonly/>
<driver name='qemu' type='raw'/>
</disk>
[ .... ]
}}}
This deployment file does not work for me: after the installation finishes, the VM always boots from the CD-ROM device again, so it never sees the new installation after rebooting. Even if the image created to hold the installation is persistent, nothing is saved when I shut the VM down, because the VM is not using the hard disk, and for the same reason I cannot take a snapshot if the image is not persistent. When I try to shut down or snapshot, ON does not report any error: with a persistent image nothing is saved, and a snapshot stays in progress forever.
On the other hand, if in the OS booting section of the template created to boot from the CD-ROM and install onto the new image I select "HD" as the first boot device and "CDROM" as the second, it creates the following deployment file for KVM, which works perfectly for me:
{{{
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-150</name>
<cputune>
<shares>2048</shares>
</cputune>
<memory>2097152</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
</os>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/103/150/disk.0'/>
<target dev='vda'/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.1'/>
<target dev='hda'/>
<readonly/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.2'/>
<target dev='hdb'/>
<readonly/>
<driver name='qemu' type='raw'/>
</disk>
[ .... ]
}}}
Thanks in advance,
Esteban
History
#1 Updated by Ruben S. Montero about 6 years ago
Saving an image has nothing to do with how it is exposed to the hypervisor (hdd...). Could you post the (OpenNebula) templates for the VM and the images?
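For reference, a sketch of the CLI calls that would collect that information (the IDs are placeholders):
{{{
# Hypothetical IDs; run on the OpenNebula frontend.
onevm show <vm_id>          # VM template and current state
oneimage show <image_id>    # image type, persistence, device prefix
}}}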
#2 Updated by Ruben S. Montero about 6 years ago
- Target version deleted (Release 4.12.1)
#3 Updated by Ruben S. Montero almost 6 years ago
- Status changed from Pending to Closed
- Resolution set to worksforme
I think that this was solved in the forum... closing this. Will reopen if needed
#4 Updated by Esteban Freire Garcia almost 6 years ago
Hi Ruben,
Sorry, I missed the post where you asked me about the template.
I need to reopen this ticket since we are still getting the issue.
This is a recent example:
CONTEXT=[
FILES_DS="$FILE[IMAGE_ID=341]",
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="2"
DISK=[
IMAGE="centos_testing_scratch_image_esteban",
IMAGE_UNAME="testing-acls" ]
DISK=[
IMAGE="test_centos_scratch_esteban_ISO",
IMAGE_UNAME="testing-acls" ]
GRAPHICS=[
LISTEN="0.0.0.0",
TYPE="VNC" ]
INPUT=[
BUS="usb",
TYPE="tablet" ]
LOGO="images/logos/centos.png"
MEMORY="4096"
NIC=[
NETWORK="internet",
NETWORK_UNAME="oneadmin" ]
NIC=[
NETWORK="testing-acls.int",
NETWORK_UNAME="testing-acls" ]
OS=[
ARCH="x86_64",
BOOT="cdrom,hd" ]
SUNSTONE_CAPACITY_SELECT="YES"
SUNSTONE_NETWORK_SELECT="YES"
VCPU="2"
If I do the CentOS installation from scratch using this template (boot order: 1st cdrom, 2nd hd), it does not work: it is not possible to save the VM after the installation, even if the image is persistent. Also, when the installation finishes and the VM is rebooted from inside, the installation starts again instead of booting the new OS.
Therefore, it is necessary to select "HD" as the first boot device and "CDROM" as the second in the OS booting section of the template created to boot from the CD-ROM and install the OS; otherwise, the changes are not saved. Doing this, the installation runs smoothly, and when it finishes and the VM is restarted from inside, the new OS starts without any issue. This is the template used:
CONTEXT=[
FILES_DS="$FILE[IMAGE_ID=341]",
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
CPU="2"
DISK=[
IMAGE="centos_testing_scratch_image_esteban",
IMAGE_UNAME="testing-acls" ]
DISK=[
IMAGE="test_centos_scratch_esteban_ISO",
IMAGE_UNAME="testing-acls" ]
GRAPHICS=[
LISTEN="0.0.0.0",
TYPE="VNC" ]
INPUT=[
BUS="usb",
TYPE="tablet" ]
LOGO="images/logos/centos.png"
MEMORY="4096"
NIC=[
NETWORK="internet",
NETWORK_UNAME="oneadmin" ]
NIC=[
NETWORK="testing-acls.int",
NETWORK_UNAME="testing-acls" ]
OS=[
ARCH="x86_64",
BOOT="hd,cdrom" ]
SUNSTONE_CAPACITY_SELECT="YES"
SUNSTONE_NETWORK_SELECT="YES"
VCPU="2"
And these are the images used:
[oneadmin@opennebula4 ~]$ oneimage show 1253
IMAGE 1253 INFORMATION
ID : 1253
NAME : test_centos_scratch_esteban_ISO
USER : testing-acls
GROUP : testing-acls
DATASTORE : local_images_ssd
TYPE : CDROM
REGISTER TIME : 10/07 14:42:43
PERSISTENT : No
SOURCE : /var/lib/one//datastores/104/d671ad5923177de764674cae020a5a2a
PATH : http://ftp.nluug.nl/ftp/pub/os/Linux/distr/CentOS/7/isos/x86_64/CentOS-7-x86_64-DVD-1503-01.iso
SIZE : 4G
STATE : rdy
RUNNING_VMS : 0
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
IMAGE TEMPLATE
DEV_PREFIX="hd"
VIRTUAL MACHINES
[oneadmin@opennebula4 ~]$ oneimage show 1255
IMAGE 1255 INFORMATION
ID : 1255
NAME : centos_testing_scratch_image_esteban
USER : testing-acls
GROUP : testing-acls
DATASTORE : local_images_ssd
TYPE : DATABLOCK
REGISTER TIME : 10/07 14:50:54
PERSISTENT : Yes
SOURCE : /var/lib/one//datastores/104/d736ef532232b46c62e478e587205299
FSTYPE : raw
SIZE : 9.8G
STATE : rdy
RUNNING_VMS : 0
PERMISSIONS
OWNER : um-
GROUP : ---
OTHER : ---
IMAGE TEMPLATE
DEV_PREFIX="vd"
VIRTUAL MACHINES
More information can be found in the thread I opened on the OpenNebula forum: https://forum.opennebula.org/t/boot-failed-not-a-bootable-disk-no-bootable-device-after-installing-a-ubuntu-image-through-on/601/5
For some reason, the VM only sees the CD-ROM if I configure the CD-ROM as the first boot device, so ON is not able to save anything because the boot device is not the expected one. I think it is something on the virt-manager side.
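For what it is worth, the <os> section that libvirt is actually running can be compared against the generated deployment file (one-150 is the domain name from the deployment files above):
{{{
# Run on the KVM node where the VM is deployed.
virsh dumpxml one-150 | grep -A4 '<os>'
}}}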
Sorry for reopening the ticket again and thanks in advance,
Esteban
#5 Updated by Esteban Freire Garcia almost 6 years ago
Could you please reopen this ticket?
Thanks in advance,
Esteban
#6 Updated by Ruben S. Montero almost 6 years ago
- Status changed from Closed to Pending
- Target version set to Release 4.14.2
#7 Updated by Carlos Martín over 5 years ago
- Category changed from Sunstone to Core & System
- Resolution deleted (worksforme)
#8 Updated by Ruben S. Montero over 5 years ago
- Assignee set to Jaime Melis
#9 Updated by Jaime Melis over 5 years ago
Hi Esteban,
it works for me... So, we could sum up this issue like this:
- boot=cdrom => you cannot see the persistent disk from the installation environment, therefore you can't even install
- boot=hd,boot=cdrom => you can see the persistent disk, after installation everything works fine
So, in short, what we're trying to determine here is why you can't see the persistent disk (so you can install onto it) when boot=cdrom is selected.
Is it exactly like that?
#10 Updated by Esteban Freire Garcia over 5 years ago
Hi Jaime,
It does not matter whether the disk/image is persistent or not; we get the same issue if we try BOOT="cdrom,hd", so we need to set BOOT="hd,cdrom" for it to work for us.
Yes, in short, what we are trying to determine is why we cannot see the installation after installing an OS from scratch with BOOT="cdrom,hd" selected and rebooting the VM once the installation is completed.
As I said in my first post, I think it is because of how the KVM deployment file is written. If we choose BOOT="cdrom,hd", this is the deployment file generated on the node:
{{{
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-150</name>
<cputune>
<shares>2048</shares>
</cputune>
<memory>2097152</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='cdrom'/>
</os>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/103/150/disk.0'/>
<target dev='vda'/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.1'/>
<target dev='hda'/>
<readonly/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.2'/>
<target dev='hdb'/>
<readonly/>
<driver name='qemu' type='raw'/>
</disk>
[ .... ]
}}}
On the other hand, this is the deployment file generated on the node if we choose BOOT="hd,cdrom":
{{{
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-150</name>
<cputune>
<shares>2048</shares>
</cputune>
<memory>2097152</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
</os>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/103/150/disk.0'/>
<target dev='vda'/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.1'/>
<target dev='hda'/>
<readonly/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/103/150/disk.2'/>
<target dev='hdb'/>
<readonly/>
<driver name='qemu' type='raw'/>
</disk>
[ .... ]
}}}
As you can see, I think the main difference is that when selecting BOOT="cdrom,hd" the OS section only includes the cdrom:
<os>
<type arch='x86_64'>hvm</type>
<boot dev='cdrom'/>
</os>
While if we select BOOT="hd,cdrom", it includes both:
<os>
<type arch='x86_64'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
</os>
I tested this with a person from CESGA last April and they are not getting this issue; therefore, it may be some kind of issue related to the KVM version we are using, so here are the relevant package versions on our nodes:
[root@node13 ~]# rpm -qa | grep -i kvm
qemu-kvm-2.1.3-8.fc21.x86_64
opennebula-node-kvm-4.12.1-1.x86_64
libvirt-daemon-kvm-1.2.9.3-2.fc21.x86_64
[root@node13 ~]# rpm -qa | grep -i virt
libvirt-daemon-config-nwfilter-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-xen-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-network-1.2.9.3-2.fc21.x86_64
libvirt-1.2.9.3-2.fc21.x86_64
libvirt-python-1.2.9-2.fc21.x86_64
libvirt-daemon-driver-nwfilter-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-secret-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-libxl-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-uml-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-vbox-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-storage-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-lxc-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-qemu-1.2.9.3-2.fc21.x86_64
libvirt-daemon-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-nodedev-1.2.9.3-2.fc21.x86_64
libvirt-daemon-driver-interface-1.2.9.3-2.fc21.x86_64
libvirt-daemon-config-network-1.2.9.3-2.fc21.x86_64
libvirt-daemon-kvm-1.2.9.3-2.fc21.x86_64
libvirt-client-1.2.9.3-2.fc21.x86_64
[root@node13 ~]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
I think only one other person has complained about this on the forum, so I am not sure whether we are the only ones getting this issue or whether people simply do not install an OS from scratch through OpenNebula.
I hope it is now clear what we are trying to describe in this bug, but in any case, please let me know if you have any doubts about it :)
Thanks in advance,
Esteban
#11 Updated by Jaime Melis over 5 years ago
I have tried to replicate the issue, but I haven't been able to.
Using the official OpenNebula 4.12.1 packages on CentOS 7, I have created this template:
CPU="2" DISK=[ IMAGE="ttyvd" ] DISK=[ IMAGE="tc-cd" ] MEMORY="128" OS=[ ARCH="x86_64", BOOT="cdrom,hd" ]
The deployment file generated is:
{{{
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>one-0</name>
<cputune>
<shares>2048</shares>
</cputune>
<memory>131072</memory>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<source file='/var/lib/one//datastores/0/0/disk.0'/>
<target dev='vda'/>
<driver name='qemu' type='qcow2' cache='none'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/one//datastores/0/0/disk.1'/>
<target dev='hda'/>
<readonly/>
<driver name='qemu' type='raw' cache='none'/>
</disk>
</devices>
<features>
<acpi/>
</features>
</domain>
}}}
As you can see, both boot devices appear correctly.
Looking at the code that generates this section of the deployment file:
https://github.com/OpenNebula/one/blob/release-4.12.1/src/vmm/LibVirtDriverKVM.cc#L367
boots = one_util::split(boot, ',');

for (vector<string>::const_iterator it=boots.begin(); it!=boots.end(); it++)
{
    file << "\t\t<boot dev='" << *it << "'/>" << endl;
}
one_util::split is defined here: https://github.com/OpenNebula/one/blob/release-4.12.1/src/common/NebulaUtil.cc#L227
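As a minimal standalone sketch (not the exact OpenNebula code), this is what that split-and-emit logic should produce for BOOT="cdrom,hd":
{{{
// Minimal standalone sketch, assuming a comma-tokenizing split like one_util::split.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::string boot = "cdrom,hd";
    std::vector<std::string> boots;

    // Tokenize the BOOT attribute on commas.
    std::stringstream ss(boot);
    std::string dev;
    while (std::getline(ss, dev, ','))
    {
        boots.push_back(dev);
    }

    // One <boot dev='...'/> element is written per token, so both
    // "cdrom" and "hd" should end up in the deployment file.
    for (std::vector<std::string>::const_iterator it = boots.begin();
         it != boots.end(); ++it)
    {
        std::cout << "\t\t<boot dev='" << *it << "'/>" << std::endl;
    }

    return 0;
}
}}}
Compiled standalone, this prints both <boot dev='cdrom'/> and <boot dev='hd'/>, which is exactly what my deployment file above shows.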
I don't understand how the behaviour you are reporting is happening.
My only idea is that maybe there's a weird/hidden character in the BOOT parameter string of the template. Could this be possible? Can you remove it, type it again carefully and check again?
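A quick way to check for invisible characters (the VM ID is a placeholder) would be something like:
{{{
# cat -A makes non-printing characters in the BOOT value visible.
onevm show -x <vm_id> | grep -i BOOT | cat -A
}}}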
Otherwise, given a VM that presents the inconsistency, please:
- send me the output of onevm show -x <id>
- send me the deployment file
- use gdb to figure out where the problem is
- try again in a new installation
#12 Updated by Jaime Melis over 5 years ago
- Target version changed from Release 4.14.2 to Release 5.0
#13 Updated by Ruben S. Montero about 5 years ago
- Status changed from Pending to Closed
- Resolution set to fixed
We now have a flexible boot order section, which can be updated in the poweroff state to simplify the install process. This feature should solve this issue. Closing this for 5.0.
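As an illustration (hedged; check the 5.0 documentation for the exact syntax), the new boot order references disk/NIC IDs rather than generic device classes, so an install-from-ISO VM can boot the CD-ROM disk first and then be switched to the hard disk while powered off:
{{{
# Hypothetical 5.0-style template fragment; device IDs depend on the disk order in the template.
OS=[
  ARCH="x86_64",
  BOOT="disk1,disk0" ]   # disk1 = CD-ROM image, disk0 = persistent datablock
}}}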