  1. Sep 20, 2017
    • Add option for OSD initial weight · ef3c3c7a
      Xav Paice authored
      In small clusters, adding OSDs at their full weight causes a massive
      IO workload, which makes performance unacceptable.  This adds a
      config option to change the initial weight; it can be set to 0 or
      some other small value for clusters that would be affected.
      
      Closes-Bug: 1716783
      Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
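The mechanics can be sketched as follows; the context key name here is an assumption for illustration, not necessarily the charm's actual name:

```python
def osd_config_context(crush_initial_weight=None):
    """Build a ceph.conf template context (sketch).

    `crush_initial_weight` stands in for a hypothetical charm config
    option; when unset, ceph's default (a weight derived from the
    device size) applies.
    """
    ctxt = {}
    if crush_initial_weight is not None:
        # Rendered into the [osd] section as e.g.:
        #   osd crush initial weight = 0
        ctxt['crush_initial_weight'] = crush_initial_weight
    return ctxt
```

Setting the initial weight to 0 lets an operator bring OSDs in without triggering rebalancing, then raise weights gradually with `ceph osd crush reweight`.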
  2. Aug 29, 2017
  3. Aug 21, 2017
  4. Aug 14, 2017
  5. Jul 07, 2017
    • Add bluestore support for OSDs · ca8a5c33
      James Page authored
      Add highly experimental support for bluestore storage format for
      OSD devices; this is disabled by default and should only be enabled
      in deployments where loss of data does not present a problem!
      
      Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
      Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
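Gating a storage format behind a default-off option might look like this sketch; `ceph-disk prepare --bluestore` matches the tooling of that era, but the function itself is illustrative:

```python
def osd_prepare_cmd(device, bluestore=False):
    """Assemble a ceph-disk prepare command (sketch)."""
    cmd = ['ceph-disk', 'prepare']
    if bluestore:
        # Highly experimental at the time; only for deployments where
        # data loss is acceptable.
        cmd.append('--bluestore')
    cmd.append(device)
    return cmd
```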
  6. Mar 28, 2017
    • Upgrade OSDs one at a time when changing ownership · 2c5406b6
      Billy Olsen authored
      Some upgrade scenarios (hammer->jewel) require that the ownership
      of the ceph osd directories is changed from root:root to ceph:ceph.
      This patch improves the upgrade experience by upgrading one OSD at
      a time as opposed to stopping all services, changing file ownership,
      and then restarting all services at once.
      
      This patch makes use of the `setuser match path` directive in the
      ceph.conf, which causes the ceph daemon to start as the owner of the
      OSD's root directory. This allows the ceph OSDs to continue running
      should an unforeseen incident occur as part of this upgrade.
      
      Change-Id: I00fdbe0fd113c56209429341f0a10797e5baee5a
      Closes-Bug: #1662591
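The serialized flow described above can be sketched as follows; the callables are injected purely for illustration, while the real charm operates on the init system's services:

```python
def upgrade_osd_dirs(osd_dirs, stop, chown, start):
    """Migrate OSD directory ownership one OSD at a time (sketch).

    With `setuser match path = /var/lib/ceph/$type/$cluster-$id` in
    ceph.conf, each daemon runs as the owner of its own directory, so
    OSDs not yet migrated keep running as root while this loop works.
    """
    for osd_dir in osd_dirs:
        stop(osd_dir)    # stop only this OSD
        chown(osd_dir)   # recursively root:root -> ceph:ceph
        start(osd_dir)   # restarts as the new directory owner
```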
  7. Feb 17, 2017
    • Only check for upgrades if bootstrapped · 9a5a710a
      Chris MacNaughton authored
      Only check for upgrade requests if the local unit is installed
      and bootstrapped, avoiding attempts to upgrade on initial
      execution of config-changed for trusty UCA pockets.
      
      Note that the upgrade process relies on a running ceph cluster.
      
      Change-Id: Ic7e427368a373ed853111d837a0223a75b46ce8e
      Closes-Bug: 1662943
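The guard itself is small; a sketch, assuming install and bootstrap state are tracked as booleans:

```python
def should_check_for_upgrade(ceph_installed, cluster_bootstrapped):
    """Only probe for upgrade requests on a functioning cluster member
    (sketch): a unit mid-install, or one whose cluster has not yet
    bootstrapped, cannot safely take part in a rolling upgrade."""
    return ceph_installed and cluster_bootstrapped
```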
  8. Jan 07, 2017
    • Generalize upgrade paths for osds · a60775be
      James Page authored
      Make use of new charms.ceph utils to generalize the upgrade
      paths for OSD upgrades, ensuring that only supported upgrade
      paths are undertaken for Ubuntu 16.04 UCA pockets.
      
      Partial-Bug: 1611082
      
      Change-Id: Ifbf3a7ffbb5ab17e839099658c7a474784ab4083
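A table of supported single-step paths is one natural shape for this; the sketch below assumes the releases implied by the 16.04 UCA pockets, not necessarily the exact contents of charms.ceph:

```python
# Each key may upgrade only to its mapped successor.
UPGRADE_PATHS = {
    'firefly': 'hammer',
    'hammer': 'jewel',
}

def upgrade_supported(current, requested):
    """Sketch: refuse anything but a known single-step path."""
    return UPGRADE_PATHS.get(current) == requested
```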
  9. Dec 22, 2016
    • Skip osd-devices not absolute paths · 665ea2b6
      Billy Olsen authored
      This change skips any osd-devices entry that does not start with a
      leading path separator ('/').  Allowing such entries caused an OSD
      to be created out of the charm directory, which could be triggered
      by something as innocuous as two spaces between devices.  The
      result was that the root device also ran an OSD, which is
      undesirable.
      
      Change-Id: I0b5530dc4ec4306a9efedb090e583fb4e2089749
      Closes-Bug: 1652175
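The fix amounts to whitespace-tolerant parsing plus an absolute-path filter; a sketch:

```python
def parse_osd_devices(config_value):
    """Parse the osd-devices config string (sketch).

    str.split() with no argument collapses runs of whitespace, so two
    spaces between entries no longer produce an empty string (which,
    treated as a relative path, pointed at the charm directory).
    """
    return [dev for dev in config_value.split() if dev.startswith('/')]
```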
  10. Sep 28, 2016
    • Add support for apparmor security profiles · 7d42f6e0
      Chris Holcombe authored
      Install apparmor profile for ceph-osd processes, and provide
      associated configuration option to place any ceph-osd processes
      into enforce, complain, or disable apparmor profile mode.
      
      As this is the first release of this feature, default to disabled
      and allow charm users to test and provide feedback for this
      release.
      
      Change-Id: I4524c587ac70de13aa3a0cb912033e6eb44b0403
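Dispatching the mode option to the apparmor utilities might look like this sketch; the mode values mirror the description above, while the profile name is an assumption:

```python
AA_MODE_TOOLS = {
    'enforce': 'aa-enforce',
    'complain': 'aa-complain',
    'disable': 'aa-disable',   # default for this first release
}

def apparmor_cmd(mode, profile='usr.bin.ceph-osd'):
    """Map a configured apparmor mode to the command that applies it
    (sketch)."""
    if mode not in AA_MODE_TOOLS:
        raise ValueError('unknown apparmor mode: {}'.format(mode))
    return [AA_MODE_TOOLS[mode], profile]
```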
  11. Sep 21, 2016
    • Move upgrade code to shared lib · 801f8538
      Chris Holcombe authored
      Moving the ceph mon upgrade code over to the
      ceph shared library. This will make it easier
      to make patches and have them be applied to all
      3 charms at once.
      
      Change-Id: I541269d05e6ff8883233a21c78ebe9df89b9e797
    • Add support for application version · 5e506b8c
      James Page authored
      Juju 2.0 provides support for display of the version of
      an application deployed by a charm in juju status.
      
      Insert the application_version_set function into the
      existing assess_status function - this gets called after
      all hook executions, and periodically after that, so any
      changes in package versions due to normal system updates
      will also be reflected in the status output.
      
      This review also includes a resync of charm-helpers to
      pickup hookenv support for this feature.
      
      Change-Id: If1ec3dcc5025d1a1f7e64f21481412ad630050ea
  12. Sep 01, 2016
    • Allow multiple rolling upgrades · 87672f47
      Chris Holcombe authored
      The rolling upgrade code sets keys in the ceph osd
      cluster to discover whether it can upgrade itself. This
      patch addresses an issue where the upgrade code was not
      taking into account multiple upgrades to newer ceph versions
      in a row.
      
      Closes-Bug: 1611719
      Change-Id: I467d95f3619b9ad2a9f4f46abee4e02b5d9703da
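One way to make consecutive upgrades safe is to scope the coordination keys by target version, so markers from one completed run cannot satisfy the next; the key shape below is an assumption:

```python
def upgrade_marker_key(service, version, unit, phase):
    """Build a monitor-cluster key name for rolling-upgrade
    coordination (sketch); including `version` keeps markers from one
    upgrade from being mistaken for progress in the next."""
    return '{}_{}_{}_{}'.format(service, version, unit, phase)
```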
  13. Aug 11, 2016
    • Clean up dependency chain · 69b821d3
      Chris MacNaughton authored
      This includes a resync of charms_ceph that raises the directory up
      one level.  The charms_ceph change being synced renames ceph.py to
      __init__.py, removing the second level of namespacing.
      
      Change-Id: I4eabbd313de2e9420667dc4acca177b2dbbf9581
  14. Aug 02, 2016
    • Migrate to shared lib · f9993191
      Chris MacNaughton authored
      This change moves our ceph.py into a separate repository that can
      be shared between the various ceph-related Juju projects, along
      with a Makefile change to use a new git_sync file to partially
      sync a git repository into a specified path.
      
      Change-Id: Iaf3ea38b6e5268c517d53b36105b70f23de891bb
  15. Jul 22, 2016
    • Fix directory ownership as part of upgrade · 3e465ba4
      Chris Holcombe authored
      This change ensures that when ceph is upgraded from an
      older version that uses root to a newer version that
      uses ceph as the process owner that all directories
      are chowned.
      
      Closes-Bug: 1600338
      Change-Id: Ifac8cde6e6ea6f3a366fb40b9ffd261036720310
  16. Jul 19, 2016
    • Use osd-upgrade user for pause/resume · 3ab6133c
      Chris Holcombe authored
      The pause and resume actions shell out to the ceph command to run
      OSD operations (in/out).
      
      Because the default cephx key given out by the monitor cluster does
      not contain the correct permissions, these commands fail.
      
      Use the osd-upgrade user which has the correct permissions.
      
      Closes-Bug: 1602826
      
      Depends-On: I6af43b61149c6eeeeb5c77950701194beda2da71
      Change-Id: Ie31bc9048972dbb0986ac8deb5b821a4db5d585f
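Selecting the cephx identity comes down to passing `--id` on the ceph CLI; a sketch of the command the actions might build:

```python
def osd_state_cmd(action, osd_id):
    """Build a 'ceph osd in/out' command running as the osd-upgrade
    cephx user (sketch); the default key lacks the needed caps."""
    if action not in ('in', 'out'):
        raise ValueError('unsupported action: {}'.format(action))
    return ['ceph', '--id', 'osd-upgrade', 'osd', action, str(osd_id)]
```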
    • Fix OSD replacement · 25a988b9
      Chris Holcombe authored
      Use the osd-upgrade key when replacing OSDs, as this key has the
      correct cephx permissions to perform the operation.
      
      Closes-Bug: 1602826
      
      Depends-On: I6af43b61149c6eeeeb5c77950701194beda2da71
      Change-Id: I32d2f1a4036e09d5d1fd13009c95ab1514e7304c
  17. Jul 14, 2016
    • Perf Optimizations · 79c6c286
      Chris Holcombe authored
      This patch starts down the road to automated performance
      tuning.  It attempts to identify optimal settings for
      hard drives and network cards and then persist them
      for reboots.  It is conservative but configurable via
      config.yaml settings.
      
      Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
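A sketch of the flavor of tuning involved; the specific values and settings below are assumptions for illustration, not the charm's actual numbers:

```python
def io_tuning_settings(device_is_rotational):
    """Pick conservative per-device I/O settings (sketch): favor
    throughput on spinning disks, low latency on flash."""
    if device_is_rotational:
        return {'scheduler': 'deadline', 'read_ahead_kb': 4096}
    return {'scheduler': 'noop', 'read_ahead_kb': 128}
```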
  18. Jun 28, 2016
    • Re-license charm as Apache-2.0 · c32211c8
      James Page authored
      All contributions to this charm were made under Canonical
      copyright; switch to the Apache-2.0 license as agreed so we can
      move forward with official project status.
      
      In order to make this change, this commit also drops the
      inclusion of upstart configurations for very early versions
      of Ceph (argonaut), as they are no longer required.
      
      Change-Id: I9609dd79855b545a2c5adc12b7ac573c6f246d48
  19. Jun 01, 2016
    • Add support for user-provided ceph config · 8f0347d6
      Edward Hope-Morley authored
      Adds a new config-flags option to the charm that
      supports setting a dictionary of ceph configuration
      settings that will be applied to ceph.conf.
      
      This implementation supports config sections so that
      settings can be applied to any section supported by
      the ceph.conf template in the charm.
      
      Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
      Closes-Bug: 1522375
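Rendering such a dictionary into ceph.conf sections can be sketched as follows; the real charm merges these settings into its ceph.conf template rather than emitting a standalone file:

```python
def render_config_flags(flags):
    """Render a {section: {key: value}} dict into ceph.conf lines
    (sketch)."""
    lines = []
    for section in sorted(flags):
        lines.append('[{}]'.format(section))
        for key in sorted(flags[section]):
            lines.append('{} = {}'.format(key, flags[section][key]))
    return lines
```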
  20. Apr 08, 2016
    • Add support for Juju network spaces · afe7651e
      James Page authored
      Juju 2.0 provides support for network spaces, allowing
      charm authors to support direct binding of relations and
      extra-bindings onto underlying network spaces.
      
      Add public and cluster extra bindings to this charm to
      support separation of client facing and cluster network
      traffic using Juju network spaces.
      
      Existing network configuration options will still be
      preferred over any Juju provided network bindings, ensuring
      that upgrades to existing deployments don't break.
      
      Change-Id: I78ab6993ad5bd324ea52e279c6ca2630f965544c
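The precedence rule is small but important for upgrades; a sketch, with hypothetical arguments standing in for the charm's config lookup and Juju's binding lookup:

```python
def resolve_network_cidr(config_cidr, space_binding_cidr):
    """Prefer an explicitly configured network over a Juju space
    binding (sketch), so pre-spaces deployments keep their existing
    addressing after upgrade."""
    return config_cidr if config_cidr else space_binding_cidr
```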
    • Pause/resume for ceph-osd charm · bbfdeb84
      Alex Kavanagh authored
      This changeset provides pause and resume actions to the ceph charm.
      The pause action issues a 'ceph osd out <local_id>' for each of the
      ceph osd ids that are on the unit.  The action does not stop the
      ceph osd processes.
      
      Note that if the pause-health action is NOT used on the ceph-mon
      charm, the cluster will start trying to rebalance the PGs across
      the remaining OSDs.  If the cluster might reach its 'full ratio',
      this would be a breaking action.  The charm does NOT check for
      this eventuality.
      
      The resume action issues a 'ceph osd in <local_id>' for each of the
      local ceph osd process on the unit.
      
      The charm 'remembers' that a pause action was issued, and if
      successful, it shows a 'maintenance' workload status as a reminder.
      
      Change-Id: I9f53c9c6c4bb737670ffcd542acec0b320cc7f6a
  21. Mar 31, 2016
    • Rolling upgrades of ceph osd cluster · 4285f14a
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial rolled fashion.  This will use the ceph monitor
      cluster to lock and allows only 1 ceph osd server at a time to upgrade.
      The upgrade is initiated by setting a config value for the source
      of the service, which prompts the osd cluster to upgrade to that
      new source and restart all osd processes server by server.  If an
      osd server has been waiting on a previous server for more than 10
      minutes without seeing it finish, it will assume that server died
      during the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set
      to osd, then this patch will cause brief temporary outages while
      osd processes are being restarted.  Future work will handle this
      case.
      
      This reverts commit db09fdce.
      
      Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
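The ten-minute dead-peer assumption can be sketched as a bounded poll; the clock and sleep are injected here purely so the sketch stays testable:

```python
import time

def wait_for_previous(previous_finished, timeout=600,
                      clock=time.time, pause=None):
    """Poll for the previous server's finished marker (sketch).

    Returns True if it finished within `timeout` seconds; False if we
    timed out and should assume it died mid-upgrade and proceed with
    our own upgrade anyway.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if previous_finished():
            return True
        if pause:
            pause(10)   # avoid hammering the monitor cluster
    return False
```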
  22. Mar 25, 2016
  23. Mar 24, 2016
    • Add hardening support · 62cc6145
      Edward Hope-Morley authored
      Add charmhelpers.contrib.hardening and calls to install,
      config-changed, upgrade-charm and update-status hooks. Also
      add new config option to allow one or more hardening
      modules to be applied at runtime.
      
      Change-Id: Ic417d678d3b0f7bfda5b393628a67297d7e79107
  24. Mar 23, 2016
    • Rolling upgrades of ceph osd cluster · 5b2cebfd
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial rolled fashion.  This will use the ceph monitor
      cluster to lock and allows only 1 ceph osd server at a time to upgrade.
      The upgrade is initiated by setting a config value for the source
      of the service, which prompts the osd cluster to upgrade to that
      new source and restart all osd processes server by server.  If an
      osd server has been waiting on a previous server for more than 10
      minutes without seeing it finish, it will assume that server died
      during the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set
      to osd, then this patch will cause brief temporary outages while
      osd processes are being restarted.  Future work will handle this
      case.
      
      Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
  25. Mar 17, 2016
    • Add support for replacing a failed OSD drive · a8790f23
      Chris Holcombe authored
      This patch adds an action to replace a hard drive on a particular
      osd server.  The user executing the action will give the OSD number
      and also the device name of the replacement drive.  The rest is
      taken care of by the action. The action will attempt to go through
      all the osd removal steps for the failed drive.  It will force
      unmount the drive and if that fails it will lazy unmount the drive.
      This force and then lazy pattern comes from experience with dead
      hard drives not behaving nicely with umount.
      
      Change-Id: I914cd484280ac3f9b9f1fad8b35ee53e92438a0a
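The force-then-lazy pattern can be sketched with an injected runner, so the sketch is testable without real mounts:

```python
def unmount_dead_drive(mount_point, run):
    """Unmount a failed OSD's drive (sketch). `run(cmd)` returns True
    on success. Dead drives frequently wedge a plain umount, hence the
    force attempt and the lazy fallback."""
    if run(['umount', '-f', mount_point]):
        return 'forced'
    run(['umount', '-l', mount_point])  # detach now, clean up later
    return 'lazy'
```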
  26. Feb 02, 2016
  27. Oct 08, 2015
  28. Oct 06, 2015