Bug #3810
Wrong view of datastore capacity
| Status: | Closed | Start date: | 05/15/2015 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | - | % Done: | 0% |
| Category: | - | | |
| Target version: | - | | |
| Resolution: | worksforme | Pull request: | |
| Affected Versions: | OpenNebula 4.12 | | |
Description
After creating a new cluster with its own datastores, there is a bug in the view of their capacity.
The new datastore shows the same usage and capacity as the main datastore on the front-end host.
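For illustration only (hypothetical IDs and sizes; the column layout is approximated from the 4.x CLI), the symptom would look like several datastores reporting identical capacity in onedatastore list:

```
$ onedatastore list
  ID NAME               SIZE  AVAIL CLUSTER   IMAGES TYPE DS  TM
   1 default             50T  90%   -             10 img  fs  ssh
 104 local_images_ssd    50T  90%   HPC Oort     250 img  fs  ssh
 115 images_ssd_gpu      50T  90%   GPU Oort       0 img  fs  ssh
```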
History
#1 Updated by Ruben S. Montero almost 6 years ago
- Status changed from Pending to Closed
- Resolution set to worksforme
This may be the case if both datastores use the same shared storage or sit under the same partition... Closing as worksforme. (If they should not be the same, we need the datastore type, DS_MAD, and information about the storage configuration.)
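A quick way to check whether two datastore directories share a partition on the front-end is a df over both base paths (a sketch assuming the default /var/lib/one/datastores base path; sizes are purely illustrative):

```
$ df -h /var/lib/one/datastores/<dsid1> /var/lib/one/datastores/<dsid2>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        50T  4.6T   46T  10% /var/lib/one
/dev/sda3        50T  4.6T   46T  10% /var/lib/one
```

If both paths resolve to the same filesystem, identical capacity figures are expected.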
#2 Updated by Martijn Kint almost 6 years ago
I'm seeing exactly the same thing in our setup. Additionally, the existing datastore is not updated when new nodes are added to the cluster; OpenNebula just keeps it at the same size.
#3 Updated by Ruben S. Montero almost 6 years ago
Can you post the configuration of the datastore (e.g. onedatastore show -x <dsid>)?
#4 Updated by Martijn Kint almost 6 years ago
Hi Ruben,
This is the output from our "main" datastore, which currently comprises 27 KVM host machines. The storage is divided as follows:
20 x 2.7 TB = 54 TB
7 x 1.1 TB = 7.7 TB
However, the datastore currently shows 50 TB of capacity (rather than the roughly 61.7 TB total) and does not change when we add new nodes. The second datastore is for the GPU nodes, which have 732 GB of local storage each; eventually this cluster will comprise 12 servers but currently holds only 2. Logic would dictate that the GPU datastore would show 1.4 TB (2 x 732 GB) instead of 50 TB.
We also tried to mount the datastore on the GPU nodes at a different mount point, but this doesn't work either and still shows the free space of the OpenNebula server, for whatever weird reason.
HPC Oort datastore:
<DATASTORE> <ID>104</ID> <UID>0</UID> <GID>0</GID> <UNAME>oneadmin</UNAME> <GNAME>oneadmin</GNAME> <NAME>local_images_ssd</NAME> <PERMISSIONS> <OWNER_U>1</OWNER_U> <OWNER_M>1</OWNER_M> <OWNER_A>0</OWNER_A> <GROUP_U>1</GROUP_U> <GROUP_M>0</GROUP_M> <GROUP_A>0</GROUP_A> <OTHER_U>0</OTHER_U> <OTHER_M>0</OTHER_M> <OTHER_A>0</OTHER_A> </PERMISSIONS> <DS_MAD><![CDATA[fs]]></DS_MAD> <TM_MAD><![CDATA[ssh]]></TM_MAD> <BASE_PATH><![CDATA[/var/lib/one//datastores/104]]></BASE_PATH> <TYPE>0</TYPE> <DISK_TYPE>0</DISK_TYPE> <STATE>0</STATE> <CLUSTER_ID>102</CLUSTER_ID> <CLUSTER>HPC Oort</CLUSTER> <TOTAL_MB>52426760</TOTAL_MB> <FREE_MB>47618148</FREE_MB> <USED_MB>4808612</USED_MB> <IMAGES> <ID>5</ID> <ID>6</ID> <ID>8</ID> <ID>16</ID> <ID>17</ID> <ID>32</ID> <ID>33</ID> <ID>35</ID> <ID>37</ID> <ID>38</ID> <ID>39</ID> <ID>41</ID> <ID>42</ID> <ID>66</ID> <ID>76</ID> <ID>77</ID> <ID>78</ID> <ID>79</ID> <ID>82</ID> <ID>83</ID> <ID>84</ID> <ID>92</ID> <ID>111</ID> <ID>112</ID> <ID>114</ID> <ID>117</ID> <ID>121</ID> <ID>124</ID> <ID>126</ID> <ID>132</ID> <ID>140</ID> <ID>141</ID> <ID>142</ID> <ID>145</ID> <ID>163</ID> <ID>171</ID> <ID>172</ID> <ID>190</ID> <ID>198</ID> <ID>200</ID> <ID>210</ID> <ID>221</ID> <ID>224</ID> <ID>238</ID> <ID>251</ID> <ID>252</ID> <ID>255</ID> <ID>259</ID> <ID>272</ID> <ID>276</ID> <ID>277</ID> <ID>282</ID> <ID>289</ID> <ID>290</ID> <ID>291</ID> <ID>293</ID> <ID>299</ID> <ID>300</ID> <ID>301</ID> <ID>307</ID> <ID>312</ID> <ID>320</ID> <ID>331</ID> <ID>333</ID> <ID>336</ID> <ID>337</ID> <ID>338</ID> <ID>340</ID> <ID>351</ID> <ID>357</ID> <ID>358</ID> <ID>393</ID> <ID>394</ID> <ID>402</ID> <ID>403</ID> <ID>405</ID> <ID>406</ID> <ID>431</ID> <ID>435</ID> <ID>438</ID> <ID>445</ID> <ID>450</ID> <ID>456</ID> <ID>457</ID> <ID>462</ID> <ID>463</ID> <ID>464</ID> <ID>478</ID> <ID>479</ID> <ID>482</ID> <ID>483</ID> <ID>484</ID> <ID>486</ID> <ID>496</ID> <ID>500</ID> <ID>501</ID> <ID>504</ID> <ID>505</ID> <ID>518</ID> <ID>528</ID> <ID>529</ID> <ID>534</ID> <ID>537</ID> <ID>538</ID> <ID>541</ID> <ID>543</ID> <ID>557</ID> <ID>558</ID> <ID>560</ID> <ID>562</ID> <ID>563</ID> <ID>564</ID> <ID>571</ID> <ID>580</ID> <ID>581</ID> <ID>583</ID> <ID>584</ID> <ID>585</ID> <ID>589</ID> <ID>590</ID> <ID>592</ID> <ID>594</ID> <ID>595</ID> <ID>598</ID> <ID>599</ID> <ID>601</ID> <ID>605</ID> <ID>606</ID> <ID>607</ID> <ID>609</ID> <ID>611</ID> <ID>612</ID> <ID>621</ID> <ID>624</ID> <ID>625</ID> <ID>627</ID> <ID>628</ID> <ID>635</ID> <ID>636</ID> <ID>637</ID> <ID>638</ID> <ID>639</ID> <ID>640</ID> <ID>641</ID> <ID>643</ID> <ID>644</ID> <ID>649</ID> <ID>656</ID> <ID>657</ID> <ID>658</ID> <ID>661</ID> <ID>662</ID> <ID>663</ID> <ID>665</ID> <ID>666</ID> <ID>668</ID> <ID>669</ID> <ID>674</ID> <ID>683</ID> <ID>684</ID> <ID>690</ID> <ID>693</ID> <ID>694</ID> <ID>696</ID> <ID>699</ID> <ID>700</ID> <ID>706</ID> <ID>707</ID> <ID>708</ID> <ID>713</ID> <ID>715</ID> <ID>716</ID> <ID>720</ID> <ID>726</ID> <ID>727</ID> <ID>732</ID> <ID>735</ID> <ID>740</ID> <ID>747</ID> <ID>749</ID> <ID>750</ID> <ID>751</ID> <ID>754</ID> <ID>755</ID> <ID>756</ID> <ID>757</ID> <ID>760</ID> <ID>761</ID> <ID>767</ID> <ID>768</ID> <ID>771</ID> <ID>777</ID> <ID>778</ID> <ID>785</ID> <ID>786</ID> <ID>791</ID> <ID>792</ID> <ID>795</ID> <ID>798</ID> <ID>801</ID> <ID>804</ID> <ID>808</ID> <ID>809</ID> <ID>810</ID> <ID>813</ID> <ID>815</ID> <ID>816</ID> <ID>817</ID> <ID>819</ID> <ID>825</ID> <ID>827</ID> <ID>830</ID> <ID>832</ID> <ID>833</ID> <ID>834</ID> <ID>839</ID> <ID>843</ID> <ID>846</ID> <ID>847</ID> <ID>862</ID> <ID>868</ID> <ID>871</ID> 
<ID>872</ID> <ID>876</ID> <ID>877</ID> <ID>878</ID> <ID>880</ID> <ID>884</ID> <ID>886</ID> <ID>888</ID> <ID>889</ID> <ID>890</ID> <ID>894</ID> <ID>900</ID> <ID>903</ID> <ID>905</ID> <ID>907</ID> <ID>908</ID> <ID>910</ID> <ID>914</ID> <ID>916</ID> <ID>917</ID> <ID>918</ID> <ID>919</ID> <ID>921</ID> <ID>922</ID> <ID>923</ID> <ID>926</ID> <ID>928</ID> <ID>929</ID> <ID>930</ID> <ID>931</ID> <ID>932</ID> <ID>933</ID> <ID>934</ID> <ID>936</ID> <ID>937</ID> <ID>938</ID> <ID>940</ID> <ID>941</ID> <ID>942</ID> <ID>946</ID> <ID>948</ID> <ID>949</ID> <ID>956</ID> <ID>958</ID> <ID>960</ID> <ID>962</ID> <ID>963</ID> <ID>965</ID> <ID>967</ID> <ID>970</ID> <ID>971</ID> <ID>974</ID> <ID>975</ID> <ID>976</ID> <ID>977</ID> <ID>978</ID> <ID>981</ID> <ID>985</ID> <ID>986</ID> <ID>987</ID> <ID>988</ID> <ID>990</ID> <ID>991</ID> <ID>992</ID> <ID>996</ID> <ID>997</ID> <ID>998</ID> <ID>1005</ID> <ID>1012</ID> <ID>1013</ID> <ID>1016</ID> </IMAGES> <TEMPLATE> <BASE_PATH><![CDATA[/var/lib/one//datastores/]]></BASE_PATH> <CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET> <DATASTORE_CAPACITY_CHECK><![CDATA[YES]]></DATASTORE_CAPACITY_CHECK> <DISK_TYPE><![CDATA[FILE]]></DISK_TYPE> <DS_MAD><![CDATA[fs]]></DS_MAD> <LN_TARGET><![CDATA[SYSTEM]]></LN_TARGET> <TM_MAD><![CDATA[ssh]]></TM_MAD> <TYPE><![CDATA[IMAGE_DS]]></TYPE> </TEMPLATE> </DATASTORE>
GPU datastore:
<DATASTORE> <ID>115</ID> <UID>0</UID> <GID>0</GID> <UNAME>oneadmin</UNAME> <GNAME>oneadmin</GNAME> <NAME>images_ssd_gpu</NAME> <PERMISSIONS> <OWNER_U>1</OWNER_U> <OWNER_M>1</OWNER_M> <OWNER_A>0</OWNER_A> <GROUP_U>1</GROUP_U> <GROUP_M>0</GROUP_M> <GROUP_A>0</GROUP_A> <OTHER_U>0</OTHER_U> <OTHER_M>0</OTHER_M> <OTHER_A>0</OTHER_A> </PERMISSIONS> <DS_MAD><![CDATA[fs]]></DS_MAD> <TM_MAD><![CDATA[ssh]]></TM_MAD> <BASE_PATH><![CDATA[/var/lib/one/datastores/115]]></BASE_PATH> <TYPE>0</TYPE> <DISK_TYPE>0</DISK_TYPE> <STATE>0</STATE> <CLUSTER_ID>103</CLUSTER_ID> <CLUSTER>GPU Oort</CLUSTER> <TOTAL_MB>52426760</TOTAL_MB> <FREE_MB>47618148</FREE_MB> <USED_MB>4808612</USED_MB> <IMAGES/> <TEMPLATE> <BASE_PATH><![CDATA[/var/lib/one/datastores/]]></BASE_PATH> <CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET> <DATASTORE_CAPACITY_CHECK><![CDATA[YES]]></DATASTORE_CAPACITY_CHECK> <DISK_TYPE><![CDATA[FILE]]></DISK_TYPE> <DS_MAD><![CDATA[fs]]></DS_MAD> <LN_TARGET><![CDATA[SYSTEM]]></LN_TARGET> <TM_MAD><![CDATA[ssh]]></TM_MAD> <TYPE><![CDATA[IMAGE_DS]]></TYPE> </TEMPLATE> </DATASTORE>
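Note that both dumps report byte-for-byte identical capacity figures (TOTAL_MB 52426760, FREE_MB 47618148, USED_MB 4808612). A hypothetical one-liner to pull just those elements from each datastore (grep -o copes with the single-line XML; datastore IDs taken from the dumps above):

```
$ for ds in 104 115; do onedatastore show -x $ds | grep -oE '<(TOTAL|FREE|USED)_MB>[^<]*</[A-Z_]+>'; done
<TOTAL_MB>52426760</TOTAL_MB>
<FREE_MB>47618148</FREE_MB>
<USED_MB>4808612</USED_MB>
<TOTAL_MB>52426760</TOTAL_MB>
<FREE_MB>47618148</FREE_MB>
<USED_MB>4808612</USED_MB>
```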
#5 Updated by Ruben S. Montero almost 6 years ago
Hi,
These are ssh datastores, so the actual size (since the storage is distributed) is shown per host. If you run onehost show, you'll see the available and total space for the datastore on that particular host. That is the metric the scheduler considers when allocating VMs to that host.
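A sketch of that per-host view (hypothetical host ID and values; in 4.x the monitoring section of onehost show reports per-datastore capacity, though the exact labels vary by version):

```
$ onehost show 27
...
LOCAL SYSTEM DATASTORE #0 CAPACITY
TOTAL:                2.7T
USED:                 300G
FREE:                 2.4T
...
```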
The information shown in the onedatastore output is for the front-end. Does that correspond to your setup?
Cheers