ceph-osd charm needs support for multiple journals
Bug #1495878 reported by Peter Sabaini
This bug affects 6 people
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
ceph-osd (Juju Charms Collection) | Fix Released | High | Peter Sabaini | 16.04
Bug Description
Currently the ceph-osd charm only supports one dedicated journal device per unit. This is a problem for nodes with many disks: a) throughput towards a single journal will become a limiting factor, and b) since a failing journal will take down all associated OSDs with it, many OSDs sharing a single journal will make for a large failure domain. It would be desirable to have the ceph-osd charm support multiple journal devices.
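For illustration only, below is a minimal sketch of how OSDs could be spread across several journal devices; the function name, device paths, and the idea that the charm would accept a list of journal devices are assumptions for this sketch, not the charm's actual code. The approach is to assign each new OSD to the journal currently backing the fewest OSDs, which divides both journal throughput load and the failure domain roughly evenly.

```python
# Sketch only -- not the ceph-osd charm's actual implementation.
# Spread OSDs across multiple journal devices by always assigning a new OSD
# to the journal that currently backs the fewest OSDs.

def pick_journal(journal_devices, osd_to_journal):
    """Return the journal device backing the fewest OSDs.

    journal_devices -- configured journal block devices, e.g. ['/dev/sdb', '/dev/sdc']
    osd_to_journal  -- dict mapping an OSD data device to its journal device
    """
    usage = {dev: 0 for dev in journal_devices}
    for journal in osd_to_journal.values():
        if journal in usage:
            usage[journal] += 1
    # Least-loaded journal first; ties broken by device name for determinism.
    return min(journal_devices, key=lambda dev: (usage[dev], dev))


if __name__ == '__main__':
    journals = ['/dev/sdb', '/dev/sdc']
    assigned = {'/dev/sdd': '/dev/sdb'}          # one OSD already on /dev/sdb
    print(pick_journal(journals, assigned))      # -> /dev/sdc
```

With this kind of least-used selection, a failed journal takes down only the OSDs assigned to it rather than every OSD on the unit.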
Related branches
lp://qastaging/~peter-sabaini/charms/trusty/ceph-osd/ceph-osd-multijournal
- Billy Olsen: Needs Resubmitting
- charmers: Pending (requested)

Diff: 194 lines (+79/-20), 2 files modified:
- hooks/ceph.py (+36/-8)
- hooks/hooks.py (+43/-12)
lp://qastaging/~peter-sabaini/charms/trusty/ceph-osd/ceph-osd-multijournal-next
- James Page: Needs Fixing
- Chris Holcombe (community): Approve

Diff: 207 lines (+84/-19) (has conflicts), 2 files modified:
- hooks/ceph.py (+39/-7)
- hooks/ceph_hooks.py (+45/-12)
lp://qastaging/~james-page/charms/trusty/ceph-osd/multijournal-fixes
- Chris MacNaughton (community): Approve
- OpenStack Charmers: Pending (requested)

Diff: 216 lines (+96/-20), 2 files modified:
- hooks/ceph.py (+42/-7)
- hooks/ceph_hooks.py (+54/-13)
tags: added: openstack
Changed in ceph-osd (Juju Charms Collection):
  milestone: none → 15.10
  assignee: nobody → Peter Sabaini (peter-sabaini)
  importance: Undecided → Low
  status: New → In Progress
Changed in ceph-osd (Juju Charms Collection):
  milestone: 15.10 → 16.01
Changed in ceph-osd (Juju Charms Collection):
  milestone: 16.01 → 16.04
Changed in ceph-osd (Juju Charms Collection):
  importance: Low → High
Changed in ceph-osd (Juju Charms Collection):
  status: In Progress → Fix Committed
Changed in ceph-osd (Juju Charms Collection):
  status: Fix Committed → Fix Released
Hi James, is this ready for download yet? I am surprised that support for more than one journal disk is not part of Juju yet. I deployed Piston Cloud two years ago, which could handle this, and I have studied Ceph as my core disk system of choice and how best to deploy it. Given that your disk system is critical to your cloud, it was obvious to me from reading the Ceph documentation that the best way to deploy it is with two journals per server on SSD, data-centre-quality Intel S3700s. You do not RAID them, as that doubles your chance of failure; you simply put 50% of your OSDs on one journal and 50% on the other. This means a single journal failure on any given server can take down at most 50% of that server's OSDs. Each of my Ceph nodes has two S3700s and six OSDs. I want to move from Piston, which is now discontinued, to Juju, and it looks like this problem has only just been identified. How soon can I use your patch? Can I use it right now?
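To make the 50/50 layout described above concrete, here is a small illustrative sketch (all device names are hypothetical) of six OSD data disks round-robined across two SSD journal devices, so losing one journal takes down at most half of the node's OSDs:

```python
# Hypothetical device names, for illustration only.
osds = ['/dev/sdd', '/dev/sde', '/dev/sdf', '/dev/sdg', '/dev/sdh', '/dev/sdi']
journals = ['/dev/sdb', '/dev/sdc']   # e.g. two Intel S3700 SSDs

# Round-robin: OSDs 0, 2, 4 journal to /dev/sdb; OSDs 1, 3, 5 to /dev/sdc.
layout = {osd: journals[i % len(journals)] for i, osd in enumerate(osds)}
for osd, journal in sorted(layout.items()):
    print(osd, '->', journal)
```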