enlisting EBS volumes failing with unclear messaging

Bug #1629921 reported by Charles Butler
This bug affects 2 people

Affects | Status | Importance | Assigned to
ceph (Juju Charms Collection) | Fix Released | High | Chris MacNaughton
ceph-osd (Juju Charms Collection) | Fix Released | High | Chris MacNaughton

Bug Description

I provisioned 3 ceph-osd applications and 3 ceph-mon applications, then started by enlisting a single 5 GB EBS volume on each to flex the Ceph deployment.

I was greeted with a failed storage-relation-changed hook, with the following stack trace:

-- snipped and uploaded as an attachment due to formatting --

Revision history for this message
Charles Butler (lazypower) wrote:
description: updated
Matt Bruzek (mbruzek) wrote:

I hit this same bug today; here are the log messages we got:

ceph bootstrapped, rescanning disks
Making dir /var/lib/charm/ceph-osd ceph:ceph 555
Monitor hosts are ['172.31.25.226:6789', '172.31.33.254:6789', '172.31.6.136:6789']
Looks like /dev/xvdb is already an OSD data or journal, skipping.
osdize cmd: ['ceph-disk', 'prepare', '--fs-type', u'xfs', u'/dev/xvdf']
Creating new GPT entries.
Could not create partition 2 from 2048 to 2099199
Setting name!
partNum is 1
REALLY setting name!
Unable to set partition 2's name to 'ceph journal'!
Could not change partition 2's type code to 45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
Traceback (most recent call last):
   File "/usr/sbin/ceph-disk", line 9, in <module>
     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4995, in run
     main(sys.argv[1:])
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4948, in main
     main_catch(args.func, args)
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4973, in main_catch
     func(args)
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1774, in main
     Prepare.factory(args).prepare()
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1762, in prepare
     self.prepare_locked()
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1794, in prepare_locked
     self.data.prepare(self.journal)
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2446, in prepare
     self.prepare_device(*to_prepare_list)
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2622, in prepare_device
     to_prepare.prepare()
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1964, in prepare
     self.prepare_device()
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2054, in prepare_device
     num=num)
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1522, in create_partition
     self.path,
   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 439, in command_check_call
     return subprocess.check_call(arguments)
   File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
     raise CalledProcessError(retcode, cmd)
 subprocess.CalledProcessError: Command '['/sbin/sgdisk', '--new=2:0:+1024M', '--change-name=2:ceph journal', '--partition-guid=2:09182b21-4a61-4852-b147-fef8cd3bc4c0', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/xvdf']' returned non-zero exit status 4
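A back-of-the-envelope check (not taken from the charm code) shows why sgdisk fails here: ceph-disk asks for a 1024 MiB journal partition (`--new=2:0:+1024M`, resolved to sectors 2048..2099199 in the log above), which on its own already consumes an entire 1 GiB device, the size Juju allocates when no size is specified.

```python
SECTOR = 512  # bytes; standard logical sector size

# Journal partition requested by ceph-disk, from the sgdisk
# command in the traceback: sectors 2048 through 2099199.
journal_sectors = 2099199 - 2048 + 1
journal_bytes = journal_sectors * SECTOR
print(journal_bytes // 2**20)  # 1024 (MiB)

# A 1 GiB volume, which is what Juju hands out when the
# add-storage request omits a size.
disk_bytes = 1 * 2**30
# Space left after the 2048-sector GPT/alignment gap at the
# start of the disk (the GPT backup at the end is ignored here,
# which only makes things worse).
usable = disk_bytes - 2048 * SECTOR
print(usable >= journal_bytes)  # False: the journal alone cannot fit
```

So "Could not create partition 2 from 2048 to 2099199" is an out-of-space failure in disguise, which is why the bug title calls out the unclear messaging.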

Matt Bruzek (mbruzek) wrote:

Reproduction steps:

https://gist.github.com/anonymous/1248995df1abe70d5fff0976446fda6d

juju deploy ./bundle.yaml

juju add-storage ceph-osd/0 osd-devices=ebs,10gb
juju add-storage ceph-osd/1 osd-devices=ebs,10gb
juju add-storage ceph-osd/2 osd-devices=ebs,10gb

Then, after the bundle was up, we got an error in the ceph-osd charm:

hook failed: "storage-attached"

The logs are above, and look similar to Chuck's original report.

James Page (james-page)
Changed in ceph (Juju Charms Collection):
status: New → Triaged
Changed in ceph-osd (Juju Charms Collection):
status: New → Triaged
Changed in ceph (Juju Charms Collection):
importance: Undecided → High
Changed in ceph-osd (Juju Charms Collection):
importance: Undecided → High
Changed in ceph (Juju Charms Collection):
milestone: none → 17.01
Changed in ceph-osd (Juju Charms Collection):
milestone: none → 17.01
Changed in ceph (Juju Charms Collection):
assignee: nobody → Chris MacNaughton (chris.macnaughton)
Changed in ceph-osd (Juju Charms Collection):
assignee: nobody → Chris MacNaughton (chris.macnaughton)
status: Triaged → In Progress
Changed in ceph (Juju Charms Collection):
status: Triaged → In Progress
Ryan Beisner (1chb1n) wrote:

Fix landed in master. Landing the proposed cherry pick / backport to stable: https://review.openstack.org/#/q/topic:bug/1629921

OpenStack Infra (hudson-openstack) wrote: Fix merged to charm-ceph-osd (stable/16.10)

Reviewed: https://review.openstack.org/388062
Committed: https://git.openstack.org/cgit/openstack/charm-ceph-osd/commit/?id=2862198ac1b830d40b16e579b762281a625ec089
Submitter: Jenkins
Branch: stable/16.10

commit 2862198ac1b830d40b16e579b762281a625ec089
Author: Chris MacNaughton <email address hidden>
Date: Mon Oct 17 16:24:07 2016 -0400

    Add minimum-size to osd-devices

    This stops an error that happens when size is not
    specified when adding storage via Juju storage hooks.
    Without a set minimum, Juju will give 1G to a disk
    which will cause ceph-disk to fail when connecting
    the new disk.

    Closes-Bug: 1629921
    Change-Id: Ib57314945b1f0bf8995029f5506543bc1b53c89b
    (cherry picked from commit d045424c54bf98b47fcd3837b7c8717358425a4b)
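The fix adds a `minimum-size` constraint to the `osd-devices` storage definition in the charm's metadata.yaml, so Juju refuses to attach a volume too small for ceph-disk to partition. A hedged sketch of what such a stanza looks like; the field names follow Juju's storage metadata format, but the actual minimum value chosen by the landed commit may differ:

```yaml
# Illustrative metadata.yaml fragment for the ceph-osd charm.
storage:
  osd-devices:
    type: block
    multiple:
      range: 0-
    # Illustrative value only; it must exceed the 1G default Juju
    # allocates when no size is given, since the 1024M journal
    # alone fills a 1G device.
    minimum-size: 10G
```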

OpenStack Infra (hudson-openstack) wrote: Fix merged to charm-ceph (stable/16.10)

Reviewed: https://review.openstack.org/388054
Committed: https://git.openstack.org/cgit/openstack/charm-ceph/commit/?id=2c6e7598f882273708e10c64ae2cc0a58b0ad02d
Submitter: Jenkins
Branch: stable/16.10

commit 2c6e7598f882273708e10c64ae2cc0a58b0ad02d
Author: Chris MacNaughton <email address hidden>
Date: Mon Oct 17 16:24:20 2016 -0400

    Add minimum-size to osd-devices

    This stops an error that happens when size is not
    specified when adding storage via Juju storage hooks.
    Without a set minimum, Juju will give 1G to a disk
    which will cause ceph-disk to fail when connecting
    the new disk.

    Closes-Bug: 1629921
    Change-Id: Id959ee65ded03b95933a1deb099f492fccc0c182
    (cherry picked from commit 0ed46719558d348d8014b3283547c5c5c72cc128)

Ryan Beisner (1chb1n)
tags: added: juju-storage uosci
Changed in ceph (Juju Charms Collection):
status: In Progress → Fix Committed
Changed in ceph-osd (Juju Charms Collection):
status: In Progress → Fix Committed
James Page (james-page)
Changed in ceph (Juju Charms Collection):
status: Fix Committed → Fix Released
Changed in ceph-osd (Juju Charms Collection):
status: Fix Committed → Fix Released