ceph-osd charm needs support for multiple journals

Bug #1495878 reported by Peter Sabaini
This bug affects 6 people
Affects Status Importance Assigned to Milestone
ceph-osd (Juju Charms Collection)
Fix Released
High
Peter Sabaini

Bug Description

Currently the ceph-osd charm only supports one dedicated journal device per unit. This is a problem for nodes with many disks: a) throughput towards a single journal will become a limiting factor, and b) since a failing journal will take down all associated OSDs with it, many OSDs sharing a single journal will make for a large failure domain. It would be desirable to have the ceph-osd charm support multiple journal devices.
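With multi-journal support, a deployment might be configured along these lines (a sketch only: the `osd-devices` and `osd-journal` option names are taken from the ceph-osd charm, but the exact syntax for passing several journal devices, and how OSDs are distributed across them, should be checked against the charm's own config documentation):

```shell
# Sketch: four data disks spread across two SSD journal devices.
# Device paths are hypothetical; verify option names via `juju config ceph-osd`.
juju deploy ceph-osd \
  --config osd-devices='/dev/sdc /dev/sdd /dev/sde /dev/sdf' \
  --config osd-journal='/dev/sda /dev/sdb'
```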

tags: added: openstack
Changed in ceph-osd (Juju Charms Collection):
milestone: none → 15.10
assignee: nobody → Peter Sabaini (peter-sabaini)
importance: Undecided → Low
status: New → In Progress
James Page (james-page)
Changed in ceph-osd (Juju Charms Collection):
milestone: 15.10 → 16.01
James Page (james-page)
Changed in ceph-osd (Juju Charms Collection):
milestone: 16.01 → 16.04
James Page (james-page)
Changed in ceph-osd (Juju Charms Collection):
importance: Low → High
James Page (james-page)
Changed in ceph-osd (Juju Charms Collection):
status: In Progress → Fix Committed
Revision history for this message
syndicate604 (w-9buntu-l) wrote :

Hi James, is this ready for download yet? I am surprised that more than one journal disk is not part of Juju yet. I deployed Piston Cloud 2 years ago, which could handle this, and I have studied Ceph as the core disk system of choice and how best to deploy it. Given that your disk system is critical to your cloud, it was obvious to me from reading the Ceph details that the best way to deploy it was with 2 journals per server on SSD, using DC-quality Intel S3700s. You do not RAID them, as that doubles your chance of failure; you just put 50% of your OSDs on one journal and 50% on the other. This means you can lose at most 50% of your OSDs per journal failure on any given server. Each of my Ceph nodes has 2 S3700s and 6 OSDs. I was wanting to move from Piston, now discontinued, to Juju, and I see you have just identified this problem. How soon can I use your patch? Can I use it right now?
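The 50/50 split described above amounts to a round-robin assignment of OSD data disks to journal SSDs. An illustrative sketch (plain Python, not charm code; device names are hypothetical):

```python
# Illustrative only: distribute OSD data disks round-robin across
# journal SSDs, so one failed journal takes down only its share of OSDs.
def assign_journals(osds, journals):
    """Map each OSD device to a journal device, round-robin."""
    return {osd: journals[i % len(journals)] for i, osd in enumerate(osds)}

osds = ["sdc", "sdd", "sde", "sdf", "sdg", "sdh"]  # 6 OSDs per node
journals = ["sda", "sdb"]                          # 2 journal SSDs
mapping = assign_journals(osds, journals)

# If journal "sda" fails, only the OSDs mapped to it are lost: 3 of 6 (50%).
lost = [osd for osd, jrnl in mapping.items() if jrnl == "sda"]
```

With a single shared journal the same failure would take down all 6 OSDs, which is exactly the failure-domain concern the bug description raises.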

Revision history for this message
Chris Holcombe (xfactor973) wrote :

@syndicate604 this was merged into ceph-osd/next

Revision history for this message
syndicate604 (w-9buntu-l) wrote :

Can I download it now, or is it months away? I don't mind if I have to manually add the files, but will they clash with other files?

Revision history for this message
Chris Holcombe (xfactor973) wrote : Re: [Bug 1495878] Re: ceph-osd charm needs support for multiple journals

Yup! It's available here: https://jujucharms.com/u/openstack-charmers-next/

I can't say whether it'll clash with your existing setup.

On 02/29/2016 09:09 AM, syndicate604 wrote:
> Can I download it now? Or it is months away? I don't mind I have to
> manually add the files but will it clash with other files?
>

Revision history for this message
syndicate604 (w-9buntu-l) wrote :

It says 0 charms, 0 bundles?

Revision history for this message
Chris Holcombe (xfactor973) wrote :

@syndicate604 sorry, I was too quick to answer with that last link. Here's where the code lives: https://code.launchpad.net/~openstack-charmers/charms/trusty/ceph-osd/next/ You can easily deploy from a local directory if you pull it down. In the meantime I'm also trying to find out where this is being published in the charm store for you.
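Pulling that branch and deploying from a local directory might look like the following under Juju 1.x local-repository conventions (a sketch; the directory layout and exact bzr/juju invocations are assumptions to verify against the Juju docs of that era):

```shell
# Sketch: fetch the /next branch and deploy it from a local repository.
mkdir -p ~/charms/trusty
bzr branch lp:~openstack-charmers/charms/trusty/ceph-osd/next ~/charms/trusty/ceph-osd
juju deploy --repository=~/charms local:trusty/ceph-osd
```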

Revision history for this message
Chris Holcombe (xfactor973) wrote :

@syndicate604 the 0 there means 0 deploys. You can still deploy from that. Just click on ceph osd and follow the instructions there. Email me if you run into trouble :)

Revision history for this message
James Page (james-page) wrote :

https://jujucharms.com/u/openstack-charmers-next/ should be OK - I see
published charms for all series of Ubuntu?

On Mon, 29 Feb 2016 at 21:31 Chris Holcombe <email address hidden>
wrote:

> @syndicate604 the 0 there means 0 deploys. You can still deploy from
> that. Just click on ceph osd and follow the instructions there. Email
> me if you run into trouble :)
>

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Somewhat more specifically, the ceph-osd on /next can be found at:
 https://jujucharms.com/u/openstack-charmers-next/ceph-osd/trusty/17
and deployed into your environment with:
juju deploy cs:~openstack-charmers-next/trusty/ceph-osd-17

On 02/29/2016 04:24 PM, Chris Holcombe wrote:
> @syndicate604 the 0 there means 0 deploys. You can still deploy from
> that. Just click on ceph osd and follow the instructions there. Email
> me if you run into trouble :)
>

James Page (james-page)
Changed in ceph-osd (Juju Charms Collection):
status: Fix Committed → Fix Released
Revision history for this message
james beedy (jamesbeedy) wrote :

@chris.macbaughton @james-page this one never made it upstream. Are there plans to bring this forward still?

My servers have 36 OSDs each; the single-SSD journal bottleneck and SPOF are an extreme hindrance.

Revision history for this message
james beedy (jamesbeedy) wrote :

@chris.macnaughton *^

Revision history for this message
james beedy (jamesbeedy) wrote :

I see now this can be accomplished using juju storage. Please disregard.

thx

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

@jamesbeedy You should be able to handle it through configuration as well, I believe, in a manner similar to the way osd-devices are handled

Revision history for this message
james beedy (jamesbeedy) wrote :

@chris.macnaughton awesome, so it does! Possibly an update to the config description to detail that it supports multiple journal devices would be helpful.
