  1. Sep 20, 2017
    • Add option for OSD initial weight · ef3c3c7a
      Xav Paice authored
      In small clusters, adding OSDs at their full weight causes a massive
      IO workload that makes performance unacceptable.  This adds a config
      option to change the initial weight; it can be set to 0 or a small
      value for clusters that would be affected (see the sketch below).
      
      Closes-Bug: 1716783
      Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
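      A minimal sketch of the resulting ceph.conf, assuming the option is
      set to 0 (the underlying Ceph setting is standard; the charm option
      name itself is not quoted here):

        [osd]
        osd crush initial weight = 0

      A newly added OSD then joins the CRUSH map with weight 0 and can be
      brought up to its full weight later with `ceph osd crush reweight`.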
  2. Sep 14, 2017
    • Drop configuration for global keyring · 14f50338
      James Page authored
      Drop the explicit global configuration of the keyring, supporting
      installation of the ceph/ceph-mon/ceph-osd charms on the same
      machine.
      
      Change-Id: Ib4afd01fbcc4478ce90de5bd464b7829ecc5da7e
      Closes-Bug: 1681750
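      A rough sketch of why this matters (the paths shown are Ceph's stock
      defaults, assumed here rather than taken from the change): a keyring
      set in [global] applies to every daemon on the machine, whereas
      relying on per-daemon defaults keeps co-located charms from fighting
      over the same setting:

        [global]
        # keyring = /etc/ceph/$cluster.$name.keyring   (no longer rendered)

        [mon]
        keyring = /var/lib/ceph/mon/$cluster-$id/keyring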
  3. Aug 29, 2017
  4. Jul 07, 2017
    • Add bluestore support for OSDs · ca8a5c33
      James Page authored
      Add highly experimental support for bluestore storage format for
      OSD devices; this is disabled by default and should only be enabled
      in deployments where loss of data does not present a problem!
      
      Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
      Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
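      What enabling it amounts to underneath, sketched as the standard Ceph
      setting (the charm option itself is not named in this commit):

        [osd]
        osd objectstore = bluestore

      Filestore remains the default, and flipping this setting does not
      convert OSDs that were already deployed.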
  5. Mar 28, 2017
    • Upgrade OSDs one at a time when changing ownership · 2c5406b6
      Billy Olsen authored
      Some upgrade scenarios (hammer->jewel) require that the ownership
      of the ceph osd directories is changed from root:root to ceph:ceph.
      This patch improves the upgrade experience by upgrading one OSD at
      a time as opposed to stopping all services, changing file ownership,
      and then restarting all services at once.
      
      This patch makes use of the `setuser match path` directive in the
      ceph.conf, which causes the ceph daemon to start as the owner of the
      OSD's root directory. This allows the ceph OSDs to continue running
      should an unforeseen incident occur as part of this upgrade.
      
      Change-Id: I00fdbe0fd113c56209429341f0a10797e5baee5a
      Closes-Bug: #1662591
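      For reference, the directive reads as follows in ceph.conf, using
      Ceph's standard OSD data location (the exact path is assumed here
      rather than quoted from the patch):

        [osd]
        setuser match path = /var/lib/ceph/$type/$cluster-$id

      An OSD whose data directory is still owned by root keeps starting as
      root until its ownership has been changed, after which it starts as
      the ceph user.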
  6. Jul 14, 2016
    • Perf Optimizations · 79c6c286
      Chris Holcombe authored
      This patch starts down the road to automated performance
      tuning.  It attempts to identify optimal settings for
      hard drives and network cards and then persist them
      across reboots.  It is conservative but configurable via
      config.yaml settings.
      
      Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
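      Purely as an illustration of the kind of tuning this refers to
      (these particular knobs are examples, not necessarily what the charm
      writes), disk read-ahead and network buffer sizes are typically
      raised and then persisted so they survive a reboot:

        # raise read-ahead on a data disk (illustrative device and value)
        echo 4096 > /sys/block/sdb/queue/read_ahead_kb
        # persist a network buffer tweak via sysctl
        echo 'net.core.rmem_max = 16777216' > /etc/sysctl.d/50-ceph-tuning.conf
        sysctl -p /etc/sysctl.d/50-ceph-tuning.conf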
  7. Jun 01, 2016
    • Add support for user-provided ceph config · 8f0347d6
      Edward Hope-Morley authored
      Adds a new config-flags option to the charm that
      supports setting a dictionary of ceph configuration
      settings that will be applied to ceph.conf.
      
      This implementation supports config sections so that
      settings can be applied to any section supported by
      the ceph.conf template in the charm.
      
      Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
      Closes-Bug: 1522375
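      A hedged sketch of how such an option is typically used (the exact
      value syntax accepted by config-flags is not spelled out in this
      message, so the dictionary below is illustrative):

        juju config ceph-osd config-flags='{"osd": {"osd max backfills": 1}}'

      The top-level keys name ceph.conf sections and the nested keys are
      rendered as settings within those sections.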
  8. May 17, 2016
    • Fix Availability Zone support to not break when not set · 20c89687
      Chris MacNaughton authored
      In addition to ensuring that we have an AZ set, we need to ensure
      that the user has asked to have the crush map customized, so that
      using the availability zone features is entirely opt-in.
      
      Change-Id: Ie13f50d4d084317199813d417a8de6dab25d340d
      Closes-Bug: 1582274
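      Opt-in here means the operator must enable the behaviour explicitly;
      a sketch, assuming the relevant charm option is a boolean named
      customize-failure-domain (the option name is not given in this
      message):

        juju config ceph-osd customize-failure-domain=true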
    • Limit OSD object name lengths for Jewel + ext4 · 53d09832
      James Page authored
      As of the Ceph Jewel release, certain limitations apply to
      OSD object name lengths: specifically, if ext4 is in use for
      block devices or a directory-based OSD is configured, OSDs
      must be configured to limit object name length:
      
        osd max object name len = 256
        osd max object namespace len = 64
      
      This may cause problems storing objects with long names via
      the ceph-radosgw charm or for direct users of RADOS.
      
      Also ensure that ceph.conf has a final newline, as ceph requires
      this.
      
      Change-Id: I26f1d8a6f9560b307929f294d2d637c92986cf41
      Closes-Bug: 1580320
      Closes-Bug: 1578403
  9. Apr 20, 2016
    • Revert "add juju availability zone to ceph osd location when present" · f1fc2257
      Chris Holcombe authored
      This reverts commit c94e0b4b.
      
      Support for juju provided zones was broken on older Ceph releases
      where MAAS zones are not configured (i.e. nothing other than the
      default zone).
      
      Backing this change out until we can provide a more complete and
      backwards compatible solution.
      
      Closes-Bug: 1570960
      
      Change-Id: I889d556d180d47b54af2991a65efcca09d685332
  10. Mar 31, 2016
    • Rolling upgrades of ceph osd cluster · 4285f14a
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial, rolling fashion.  It uses the ceph monitor
      cluster as a lock, allowing only 1 ceph osd server at a time to
      upgrade.  The upgrade is initiated by setting the source config
      value for the service, which prompts the osd cluster to upgrade to
      that new source and restart all osd processes server by server.  If
      an osd server has been waiting on a previous server for more than
      10 minutes and hasn't seen it finish, it will assume it died during
      the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set to
      osd, then this patch will cause brief temporary outages while osd
      processes are being restarted.  Future work will handle this case.
      
      This reverts commit db09fdce.
      
      Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
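      As a sketch of the workflow described above (the source string is an
      example, not taken from this message), the rolling upgrade is kicked
      off simply by changing the charm's source option:

        juju config ceph-osd source=cloud:trusty-mitaka

      Each unit then takes the monitor-backed lock in turn, upgrades its
      packages, restarts its osd processes, and releases the lock for the
      next server.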
  11. Mar 25, 2016
  12. Mar 23, 2016
    • Rolling upgrades of ceph osd cluster · 5b2cebfd
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial, rolling fashion.  It uses the ceph monitor
      cluster as a lock, allowing only 1 ceph osd server at a time to
      upgrade.  The upgrade is initiated by setting the source config
      value for the service, which prompts the osd cluster to upgrade to
      that new source and restart all osd processes server by server.  If
      an osd server has been waiting on a previous server for more than
      10 minutes and hasn't seen it finish, it will assume it died during
      the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set to
      osd, then this patch will cause brief temporary outages while osd
      processes are being restarted.  Future work will handle this case.
      
      Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
  13. Mar 18, 2016
    • add juju availability zone to ceph osd location when present · c94e0b4b
      Chris MacNaughton authored
      The approach here is to use the availability zone as an imaginary rack.
      All hosts that are in the same AZ will be in the same imaginary rack.
      From Ceph's perspective this doesn't matter as it's just a bucket after all.
      This will give users the ability to further customize their ceph deployment.
      
      Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
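      In CRUSH terms this corresponds to creating a rack bucket per AZ and
      moving each host under it; a sketch using standard ceph CLI commands
      (the bucket and host names are illustrative):

        ceph osd crush add-bucket az1 rack
        ceph osd crush move az1 root=default
        ceph osd crush move node-1 rack=az1

      Hosts in the same availability zone end up under the same rack
      bucket, which CRUSH rules can then use as a failure domain.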
  14. Jan 18, 2016
  15. Jan 13, 2016
  16. Sep 29, 2014
  17. Sep 27, 2014
  18. Sep 24, 2014
  19. Sep 18, 2014
  20. Jul 23, 2014
  21. Jun 06, 2014
  22. Mar 25, 2014
  23. Dec 12, 2013
  24. Dec 17, 2012
  25. Oct 19, 2012
  26. Oct 09, 2012
  27. Oct 08, 2012