Feature #5208
Add CEPH_KEY attribute to ceph drivers to allow for multi ceph cluster deployments which require authentication
| Status: | Closed | Start date: | 06/28/2017 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | - | % Done: | 0% |
| Category: | Drivers - Storage | | |
| Target version: | Release 5.4 | | |
| Resolution: | fixed | Pull request: | |
Description
Currently ONE supports multiple Ceph datastores through the CEPH_USER / CEPH_CONF options (although undocumented, AFAIK). However, if a Ceph cluster requires authentication (auth_cluster_required = cephx, auth_service_required = cephx, auth_client_required = cephx), this is not enough: you also need to specify the keyring for that specific cluster. Adding a "CEPH_KEY" option to the datastore attributes would fix that.
Use case: https://forum.opennebula.org/t/question-about-image-migration-between-datastores-and-ceph/3606
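For reference, a datastore pointing at a second, cephx-authenticated cluster would look roughly like the template below, where CEPH_CONF selects the other cluster's configuration and the proposed CEPH_KEY supplies its keyring. This is only an illustrative sketch: the cluster names, hosts, pool, and file paths are made up, not taken from the issue.

```
# Hypothetical ONE datastore template for a second Ceph cluster ("cluster-b").
# CEPH_KEY is the attribute proposed by this feature request.
NAME        = "ceph_cluster_b"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "mon1.cluster-b.example mon2.cluster-b.example"
CEPH_USER   = libvirt
CEPH_CONF   = /etc/ceph/cluster-b.conf
CEPH_KEY    = /etc/ceph/cluster-b.client.libvirt.keyring
```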
Associated revisions
F #5208: push down ceph_key to downloader to import rbd in marketplace
F #5208: Add CEPH_KEY to inherit attributes, so it is added to VM DISK
History
#1 Updated by Stefan Kooman about 4 years ago
#2 Updated by Stefan Kooman about 4 years ago
PR was not correct / complete. I'll create a new PR.
#3 Updated by Stefan Kooman about 4 years ago
#4 Updated by Stefan Kooman about 4 years ago
Note: clone option does not support cloning between different ceph clusters, only within a cluster. ONE does not prevent you from trying ... but it will fail. The driver will need some work to allow for cross cluster cloning.
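Copying an image between clusters would instead need an explicit export/import step, along these lines (the cluster config paths, user, pool, and image names below are illustrative assumptions, not driver code):

```
# Sketch: stream an RBD image from cluster A into cluster B,
# each with its own config file and cephx keyring.
rbd export --conf /etc/ceph/cluster-a.conf \
           --id libvirt --keyring /etc/ceph/cluster-a.client.libvirt.keyring \
           one/one-42 - \
| rbd import --conf /etc/ceph/cluster-b.conf \
             --id libvirt --keyring /etc/ceph/cluster-b.client.libvirt.keyring \
             - one/one-42
```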
#5 Updated by Ruben S. Montero about 4 years ago
Code is now in the repository. We need to update documentation. I've filed an issue for the other request: #5212
#6 Updated by Ruben S. Montero about 4 years ago
- Status changed from Pending to Closed
- Resolution set to fixed
#7 Updated by Stefan Kooman almost 4 years ago
During RC1 testing I found a bug I had made in the monitor driver, and I had also forgotten to add CEPH_KEY to the delete operation. Both are now fixed in this PR: