  1. Apr 08, 2016
    • Add support for Juju network spaces · afe7651e
      James Page authored
      Juju 2.0 adds support for network spaces, allowing charm
      authors to bind relations and extra-bindings directly onto
      underlying network spaces.
      
      Add public and cluster extra bindings to this charm to
      support separation of client facing and cluster network
      traffic using Juju network spaces.
      
      Existing network configuration options will still be
      preferred over any Juju provided network bindings, ensuring
      that upgrades to existing deployments don't break.
      
      Change-Id: I78ab6993ad5bd324ea52e279c6ca2630f965544c
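      
      A minimal sketch of the bindings this adds and how they might be
      used at deploy time, assuming Juju 2.0 and hypothetical space names
      'ceph-public' and 'ceph-cluster':
      
          # metadata.yaml stanza declaring the extra bindings
          extra-bindings:
            public:
            cluster:
      
          # Illustrative deploy command binding each endpoint to a space;
          # existing network config options still take precedence.
          juju deploy ceph-osd --bind "public=ceph-public cluster=ceph-cluster"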
    • Pause/resume for ceph-osd charm · bbfdeb84
      Alex Kavanagh authored
      This changeset provides pause and resume actions for the ceph-osd charm.
      The pause action issues a 'ceph osd out <local_id>' for each of the
      ceph osd ids that are on the unit.  The action does not stop the
      ceph osd processes.
      
      Note that if the pause-health action is NOT used on the ceph-mon
      charm, the cluster will start trying to rebalance the PGs across
      the remaining OSDs.  If the cluster might reach its 'full ratio'
      as a result, this is a breaking action.  The charm does NOT check
      for this eventuality.
      
      The resume action issues a 'ceph osd in <local_id>' for each of the
      local ceph osd processes on the unit.
      
      The charm 'remembers' that a pause action was issued, and if
      successful, it shows a 'maintenance' workload status as a reminder.
      
      Change-Id: I9f53c9c6c4bb737670ffcd542acec0b320cc7f6a
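      
      A minimal sketch of what the pause and resume actions do per OSD,
      assuming a hypothetical list of the OSD ids hosted on the unit:
      
          import subprocess
      
          def pause_local_osds(local_osd_ids):
              # Mark each OSD 'out' so Ceph stops mapping PGs to it; the
              # ceph-osd daemons themselves keep running.
              for osd_id in local_osd_ids:
                  subprocess.check_call(['ceph', 'osd', 'out', str(osd_id)])
      
          def resume_local_osds(local_osd_ids):
              # Mark each OSD 'in' again so Ceph rebalances data back onto it.
              for osd_id in local_osd_ids:
                  subprocess.check_call(['ceph', 'osd', 'in', str(osd_id)])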
  2. Apr 07, 2016
    • Check for Keystone apache2 process for liberty+ · 72b8ecad
      James Page authored
      The keystone charm recently changed to run keystone as a wsgi
      process under Apache2; refactor amulet test to ensure that
      apache2 is checked instead of keystone for >= liberty.
      
      Change-Id: Ide7c6e6349b80662677c6d9f3ef3e84b09b18b9b
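      
      A hedged sketch of the version-conditional check described above
      (helper and constant names are illustrative, not the charm's actual
      amulet API):
      
          LIBERTY = 12  # illustrative ordinal for the liberty release
      
          def keystone_services(openstack_release):
              # From liberty onwards keystone runs as a wsgi app under
              # apache2, so check for the apache2 process instead.
              if openstack_release >= LIBERTY:
                  return ['apache2']
              return ['keystone']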
  3. Mar 31, 2016
    • Rolling upgrades of ceph osd cluster · 4285f14a
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial, rolling fashion.  It uses the ceph monitor
      cluster as a lock, allowing only one ceph osd server to upgrade at
      a time.  The upgrade is initiated by setting the 'source' config
      value for the service, which prompts the osd cluster to upgrade to
      that new source and restart all osd processes server by server.  If
      an osd server has been waiting on a previous server for more than
      10 minutes and hasn't seen it finish, it will assume it died during
      the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set
      to 'osd', then this patch will cause brief temporary outages while
      osd processes are being restarted.  Future work will handle this
      case.
      
      This reverts commit db09fdce.
      
      Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
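      
      A minimal sketch of the wait-with-timeout behaviour described above,
      with previous_node_finished and upgrade_local_osds as hypothetical
      helpers:
      
          import time
      
          UPGRADE_TIMEOUT = 10 * 60  # seconds to wait on the previous server
      
          def wait_then_upgrade(previous_node_finished, upgrade_local_osds):
              # Wait for the previous osd server to finish its upgrade; if it
              # takes longer than 10 minutes, assume it died mid-upgrade and
              # proceed anyway.
              deadline = time.time() + UPGRADE_TIMEOUT
              while not previous_node_finished() and time.time() < deadline:
                  time.sleep(30)
              upgrade_local_osds()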
  4. Mar 30, 2016
  5. Mar 29, 2016
  6. Mar 28, 2016
    • Update ceph_encryption amulet test to raise on failure · 21929d73
      Chris MacNaughton authored
      Currently, when this test should fail, it just returns False; it
      should instead call amulet.raise_status so that the test gets
      marked as failed.  For Mitaka, we are currently skipping the
      encryption test, as the Ceph charm cannot yet deploy encryption on
      Infernalis.
      
      Change-Id: I6a15b2d2560a5dffb9a77a8e5965613a8d3f6aac
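      
      For reference, the failure call referred to above looks roughly like
      this (the message text is illustrative):
      
          import amulet
      
          # Fail the test explicitly instead of returning False
          amulet.raise_status(amulet.FAIL, msg='ceph encryption check failed')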
  7. Mar 25, 2016
  8. Mar 24, 2016
  9. Mar 23, 2016
    • Rolling upgrades of ceph osd cluster · 5b2cebfd
      Chris Holcombe authored
      This change adds functionality to allow the ceph osd cluster to
      upgrade in a serial, rolling fashion.  It uses the ceph monitor
      cluster as a lock, allowing only one ceph osd server to upgrade at
      a time.  The upgrade is initiated by setting the 'source' config
      value for the service, which prompts the osd cluster to upgrade to
      that new source and restart all osd processes server by server.  If
      an osd server has been waiting on a previous server for more than
      10 minutes and hasn't seen it finish, it will assume it died during
      the upgrade and proceed with its own upgrade.
      
      I had to modify the amulet test slightly to use the ceph-mon charm
      instead of the default ceph charm.  I also changed the test so that
      it uses 3 ceph-osd servers instead of 1.
      
      Limitations of this patch: if the osd failure domain has been set
      to 'osd', then this patch will cause brief temporary outages while
      osd processes are being restarted.  Future work will handle this
      case.
      
      Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
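      
      A hedged example of how such an upgrade might be initiated, assuming
      the Juju 1.x CLI and an illustrative target source:
      
          # Point the service at a newer package source; the charm then
          # rolls the upgrade through the osd servers one at a time.
          juju set ceph-osd source=cloud:trusty-mitaka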
    • Update to charm-tools >= 2.0.0 · ba6397ca
      James Page authored
      The new release of charm-tools no longer ships the charm
      command; update minimum version requirement and switch
      to using charm-proof instead, unblocking current pep8
      failures across all charms.
      
      Also pin the version of requests to 2.6.0 until theblues (an
      indirect dependency of charm-tools) sorts out its requirements
      versioning.
      
      Change-Id: I86b9094501dc1101bcad7038acd92f89ac71c95c
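      
      A hedged sketch of the pins described, assuming they live in the
      charm's test-requirements.txt:
      
          charm-tools>=2.0.0
          requests==2.6.0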
  10. Mar 21, 2016
  11. Mar 18, 2016
    • add juju availability zone to ceph osd location when present · c94e0b4b
      Chris MacNaughton authored
      The approach here is to use the availability zone as an imaginary rack.
      All hosts that are in the same AZ will be in the same imaginary rack.
      From Ceph's perspective this doesn't matter as it's just a bucket after all.
      This will give users the ability to further customize their ceph deployment.
      
      Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
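      
      A minimal sketch of the idea, assuming the availability zone is
      exposed to hooks via the JUJU_AVAILABILITY_ZONE environment variable
      (the helper name is illustrative):
      
          import os
          import socket
      
          def crush_location():
              # Treat the Juju availability zone as an imaginary rack so
              # that hosts in the same AZ share a CRUSH bucket.
              az = os.environ.get('JUJU_AVAILABILITY_ZONE')
              host = socket.gethostname()
              if az:
                  return 'root=default rack={} host={}'.format(az, host)
              return 'root=default host={}'.format(host)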
  12. Mar 17, 2016
    • Add support for replacing a failed OSD drive · a8790f23
      Chris Holcombe authored
      This patch adds an action to replace a hard drive for a particular
      osd server.  The user executing the action gives the OSD number
      and the device name of the replacement drive.  The rest is taken
      care of by the action, which attempts to go through all of the osd
      removal steps for the failed drive.  It will force unmount the
      drive and, if that fails, lazily unmount it.  This force-then-lazy
      pattern comes from experience with dead hard drives not behaving
      nicely with umount.
      
      Change-Id: I914cd484280ac3f9b9f1fad8b35ee53e92438a0a
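      
      A minimal sketch of the force-then-lazy unmount pattern described
      above (the device path is whatever the failed OSD was using):
      
          import subprocess
      
          def unmount(device):
              # Try a forced unmount first; dead drives often hang a plain
              # umount, so fall back to a lazy unmount if that fails.
              try:
                  subprocess.check_call(['umount', '-f', device])
              except subprocess.CalledProcessError:
                  subprocess.check_call(['umount', '-l', device])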
  13. Mar 16, 2016
    • Update amulet test to include a non-existent osd-devices value · 0cb5cd7f
      Ryan Beisner authored
      The osd-devices charm config option is a whitelist, and the
      charm needs to gracefully handle items in that whitelist which
      may not exist.
      
      Change-Id: Iea212ef0e0987767e0e666ee2e30a59d4bef189a
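      
      An illustrative config value exercising that behaviour, where one of
      the whitelisted devices does not exist on the unit:
      
          juju set ceph-osd osd-devices="/dev/vdb /dev/nonexistent"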
    • Use tox in Makefile targets · 5eb7fb5b
      Billy Olsen authored
      Modify the Makefile to point at the appropriate tox targets
      so that tox and Make output can be equivalent. This involves
      mapping the lint target to the pep8 target and the test target
      to the py27 target.
      
      Change-Id: I7216b8338ca3f548b6b373821d2bf9a4dca37286
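      
      The mapping described above, expressed as the equivalent tox
      invocations (environment names assumed to be the conventional ones):
      
          tox -e pep8   # what `make lint` now runs
          tox -e py27   # what `make test` now runs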
  14. Mar 04, 2016
  15. Mar 03, 2016
  16. Mar 02, 2016
  17. Feb 29, 2016
  18. Feb 25, 2016
  19. Feb 24, 2016
  20. Feb 23, 2016
  21. Feb 22, 2016
  22. Feb 18, 2016
  23. Feb 17, 2016
  24. Feb 16, 2016
  25. Feb 12, 2016
  26. Feb 10, 2016
  27. Feb 02, 2016
  28. Jan 30, 2016
  29. Jan 29, 2016