ceph incremental backup fails in mitaka
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Medium | Eric Harney |
os-brick | Fix Committed | Low | Xiaojun Liao |
Bug Description
When I try to back up a volume (Ceph backend) via "cinder backup" to a 2nd Ceph cluster, cinder creates a full backup each time instead of a diff backup.
mitaka release
cinder-backup 2:8.0.0-0ubuntu1 all Cinder storage service - Scheduler server
cinder-common 2:8.0.0-0ubuntu1 all Cinder storage service - common files
cinder-volume 2:8.0.0-0ubuntu1 all Cinder storage service - Volume server
python-cinder 2:8.0.0-0ubuntu1 all Cinder Python libraries
My steps are:
1. cinder backup-create a3bacaf5-
2. cinder backup-create --incremental --force a3bacaf5-
and what I get in the Ceph backup cluster:
rbd --cluster bak -p backups du
volume-
volume-
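For reference, a quick way to check whether the incremental path actually ran (a sketch, assuming the 'bak' cluster and 'backups' pool from above; with a working ceph-to-ceph incremental backup you would expect one base image per volume carrying backup.* snapshots, not a second full-size image):

    import subprocess

    def list_backup_images(cluster='bak', pool='backups'):
        # 'rbd ls' and 'rbd snap ls' are standard rbd CLI commands.
        names = subprocess.check_output(
            ['rbd', '--cluster', cluster, '-p', pool, 'ls']).decode().split()
        for name in names:
            snaps = subprocess.check_output(
                ['rbd', '--cluster', cluster, 'snap', 'ls',
                 '{0}/{1}'.format(pool, name)]).decode()
            print(name, snaps)

    list_backup_images()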
Changed in cinder:
assignee: nobody → Tom Barron (tpb)
summary: - backup to 2nd ceph cluster
         + ceph incremental backup fails in mitaka
tags: added: backup-service mitaka-backport-potential
tags: added: ceph
Changed in cinder:
assignee: nobody → Jon Bernard (jbernard)
Changed in cinder:
assignee: Jon Bernard (jbernard) → nobody
importance: Undecided → Medium
tags: removed: mitaka-backport-potential
Changed in cinder:
assignee: Shubham (shubham0d) → nobody
Changed in cinder:
status: Confirmed → New
status: New → Confirmed
Changed in cinder:
assignee: nobody → Xiaojun Liao (wwba)
Changed in cinder:
assignee: Xiaojun Liao (wwba) → nobody
Changed in os-brick:
assignee: nobody → Xiaojun Liao (wwba)
information type: Public → Public Security
information type: Public Security → Public
information type: Public → Private Security
information type: Private Security → Public
Changed in cinder:
assignee: nobody → Chaynika Saikia (csaikia)
Changed in cinder:
status: Confirmed → In Progress
Changed in cinder:
assignee: Chaynika Saikia (csaikia) → zheng yin (yin-zheng)
Changed in cinder:
assignee: zheng yin (yin-zheng) → nobody
assignee: nobody → zheng yin (yin-zheng)
Changed in cinder:
assignee: zheng yin (yin-zheng) → nobody
Changed in os-brick:
status: In Progress → Fix Committed
Changed in cinder:
status: In Progress → New
Changed in cinder:
assignee: nobody → Alan Bishop (alan-bishop)
Changed in os-brick:
importance: Undecided → Low
information type: Public → Public Security
information type: Public Security → Public
I tried to debug this error.
ceph.py at line 859:

    do_full_backup = False
    if self._file_is_rbd(volume_file):
        # If volume an RBD, attempt incremental backup.
        try:
            self._backup_rbd(backup_id, volume_id, volume_file,
                             volume_name, length)
        except exception.BackupRBDOperationFailed:
            LOG.debug("Forcing full backup of volume %s.", volume_id)
            do_full_backup = True
    else:
        do_full_backup = True

    if do_full_backup:
        self._full_backup(backup_id, volume_id, volume_file,
                          volume_name, length)
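_backup_rbd is the incremental path: the driver pipes 'rbd export-diff' on the source cluster into 'rbd import-diff' on the backup cluster. A minimal sketch of that transfer, with illustrative names (the real helper also passes user and conf arguments for both clusters):

    import subprocess

    def rbd_diff_transfer(src_spec, dest_spec, snap, from_snap=None):
        # Export the source image at 'snap'; with --from-snap only the
        # changes between the two snapshots are exported (incremental).
        export_cmd = ['rbd', 'export-diff']
        if from_snap:
            export_cmd += ['--from-snap', from_snap]
        export_cmd += ['{0}@{1}'.format(src_spec, snap), '-']

        # Apply the diff on the backup cluster ('bak', as in this report).
        import_cmd = ['rbd', '--cluster', 'bak', 'import-diff', '-', dest_spec]

        exporter = subprocess.Popen(export_cmd, stdout=subprocess.PIPE)
        importer = subprocess.Popen(import_cmd, stdin=exporter.stdout)
        exporter.stdout.close()  # let the exporter get SIGPIPE if import dies
        importer.communicate()
        exporter.wait()

    # Hypothetical names, for illustration only:
    # rbd_diff_transfer('volumes/volume-a3bacaf5',
    #                   'backups/volume-a3bacaf5.backup.base',
    #                   'backup.snap.2', from_snap='backup.snap.1')

So whenever _file_is_rbd() returns False, none of this runs and the driver falls back to a full copy.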
But something goes wrong: the function _file_is_rbd returned False.
Let's look at the '_file_is_rbd' function in ceph.py at line 683:

    def _file_is_rbd(self, volume_file):
        """Returns True if the volume_file is actually an RBD image."""
        return hasattr(volume_file, 'rbd_image')
This means the 'rbd_image' attribute was never assigned.
The 'rbd_image' attribute belongs to the class 'RBDImageIOWrapper' in cinder/volume/drivers/rbd.py.
I printed out the contents of dir(volume_file):

['__abstractmethods__', '__class__', '__delattr__', '__doc__', '__enter__', '__exit__', '__format__', '__getattribute__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_abc_cache', '_abc_negative_cache', '_abc_negative_cache_version', '_abc_registry', '_checkClosed', '_checkReadable', '_checkSeekable', '_checkWritable', '_inc_offset', 'close', 'closed', 'fileno', 'flush', 'isatty', 'next', 'read', 'readable', 'readall', 'readline', 'readlines', 'seek', 'seekable', 'tell', 'truncate', 'writable', 'write', 'writelines']
These attributes come from the class RBDVolumeIOWrapper in os_brick/initiator/linuxrbd.py; note that 'rbd_image' is not among them.
In sum, cinder-backup did not identify the image as an RBD image.
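A minimal standalone reproduction of the failing check, assuming (per the dir() output above) that the os-brick wrapper keeps the RBD volume in a private attribute and exposes no public 'rbd_image':

    class RBDVolumeIOWrapper(object):
        """Simplified stand-in for the os_brick.initiator.linuxrbd wrapper."""
        def __init__(self, rbd_volume):
            self._rbd_volume = rbd_volume  # private; no 'rbd_image' attribute

    def _file_is_rbd(volume_file):
        # cinder's probe from ceph.py, shown above
        return hasattr(volume_file, 'rbd_image')

    print(_file_is_rbd(RBDVolumeIOWrapper(object())))  # False -> full backup

This is presumably what the Fix Committed os-brick task above addresses: giving the wrapper the attribute the ceph backup driver probes for.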
my cinder.conf on the block node:

[DEFAULT]
debug = true
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.30.17.21
enabled_backends = lvm,rbd
glance_host = controller
control_exchange = cinder
notification_driver = messagingv2
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/bak.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

[database]
connection = mysql://cinder:XXXXXXX@controller/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = XXXXXXXXX

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = XXXXXXX

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-vo...