- Apr 08, 2016
James Page authored
Juju 2.0 provides support for network spaces, allowing charm authors to support direct binding of relations and extra-bindings onto underlying network spaces. Add public and cluster extra-bindings to this charm to support separation of client-facing and cluster network traffic using Juju network spaces. Existing network configuration options will still be preferred over any Juju-provided network bindings, ensuring that upgrades to existing deployments don't break.

Change-Id: I78ab6993ad5bd324ea52e279c6ca2630f965544c
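Extra bindings like these are declared in a charm's metadata.yaml. A minimal sketch of what such a stanza looks like (the binding names public and cluster come from the commit message; the rest of the file is elided):

```yaml
# metadata.yaml (fragment) -- declare extra bindings so each can be
# mapped onto a Juju network space at deploy time.
extra-bindings:
  public:
  cluster:
```

At deploy time these can then be mapped onto spaces with something like `juju deploy ceph --bind "public=public-space cluster=cluster-space"`.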
-
Alex Kavanagh authored
This changeset provides pause and resume actions to the ceph charm. The pause action issues a 'ceph osd out <local_id>' for each of the ceph osd ids that are on the unit. The action does not stop the ceph osd processes. Note that if the pause-health action is NOT used on the ceph-mon charm, then the cluster will start trying to rebalance the PGs across the remaining OSDs. If that rebalance could push the cluster past its 'full ratio', this is a breaking action; the charm does NOT check for this eventuality. The resume action issues a 'ceph osd in <local_id>' for each of the local ceph osd processes on the unit. The charm 'remembers' that a pause action was issued, and if successful, it shows a 'maintenance' workload status as a reminder.

Change-Id: I9f53c9c6c4bb737670ffcd542acec0b320cc7f6a
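The per-OSD out/in commands described above can be sketched as follows. This is a hedged illustration, not the action's actual code: the function names and the `local_osd_ids` parameter are invented, and the real action executes the commands via the ceph CLI rather than merely building them.

```python
def pause_commands(local_osd_ids):
    # One 'ceph osd out <id>' per OSD hosted on this unit; the OSD
    # processes themselves are left running.
    return [["ceph", "osd", "out", str(osd_id)] for osd_id in local_osd_ids]


def resume_commands(local_osd_ids):
    # Mirror image: mark each local OSD back 'in'.
    return [["ceph", "osd", "in", str(osd_id)] for osd_id in local_osd_ids]
```

Each command list would then be handed to something like `subprocess.check_call` on the unit.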
-
- Apr 07, 2016
James Page authored
The keystone charm recently changed to run keystone as a wsgi process under Apache2; refactor the amulet test to ensure that apache2 is checked instead of keystone for >= liberty.

Change-Id: Ide7c6e6349b80662677c6d9f3ef3e84b09b18b9b
-
- Mar 31, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster as a lock, allowing only 1 ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the source for the service, which will prompt the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume it died during the upgrade and proceed with its own upgrade.

I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1.

Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case.

This reverts commit db09fdce.

Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
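The 10-minute dead-peer assumption above boils down to a simple predicate. A minimal sketch, with invented names (the real charm coordinates the lock through state stored in the ceph monitor cluster, which is not shown here):

```python
UPGRADE_WAIT_TIMEOUT = 600  # 10 minutes, per the commit message


def may_start_upgrade(previous_server_done, seconds_waited,
                      timeout=UPGRADE_WAIT_TIMEOUT):
    # An OSD server proceeds when its predecessor has finished its
    # upgrade, or assumes the predecessor died once the wait exceeds
    # the timeout and upgrades anyway.
    return previous_server_done or seconds_waited > timeout
```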
-
- Mar 30, 2016
Jenkins authored
-
- Mar 29, 2016
Jenkins authored
-
- Mar 28, 2016
Chris MacNaughton authored
Currently, when this test should fail, it just returns False, when it should call amulet.raise_status so that the test gets marked as failed. For Mitaka, we are currently skipping the encryption test, as the Ceph charm cannot currently deploy encryption on Infernalis.

Change-Id: I6a15b2d2560a5dffb9a77a8e5965613a8d3f6aac
-
- Mar 25, 2016
Jenkins authored
-
Chris Holcombe authored
This reverts commit 5b2cebfd. Change-Id: Ic6f371fcc2879886b705fdce4d59bc99e41eea89
-
- Mar 24, 2016
Jenkins authored
-
Edward Hope-Morley authored
Add charmhelpers.contrib.hardening and calls to the install, config-changed, upgrade-charm and update-status hooks. Also add a new config option to allow one or more hardening modules to be applied at runtime.

Change-Id: Ic417d678d3b0f7bfda5b393628a67297d7e79107
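A config option for selecting hardening modules typically looks like the following config.yaml fragment. The option name, type, and wording here are assumptions for illustration, not copied from the charm:

```yaml
# config.yaml (fragment) -- hypothetical shape of the new option.
options:
  harden:
    type: string
    default: ""
    description: |
      Space-separated list of hardening modules to apply at
      runtime, e.g. "os ssh".
```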
-
- Mar 23, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster as a lock, allowing only 1 ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the source for the service, which will prompt the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume it died during the upgrade and proceed with its own upgrade.

I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1.

Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case.

Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
-
James Page authored
The new release of charm-tools no longer ships the charm command; update the minimum version requirement and switch to using charm-proof instead, unblocking current pep8 failures across all charms. Also pin the version of requests to 2.6.0 until theblues (an indirect dependency of charm-tools) sorts out its requirements versioning.

Change-Id: I86b9094501dc1101bcad7038acd92f89ac71c95c
-
- Mar 21, 2016
- Mar 18, 2016
Chris MacNaughton authored
The approach here is to use the availability zone as an imaginary rack. All hosts that are in the same AZ will be in the same imaginary rack. From Ceph's perspective this doesn't matter, as it's just a bucket after all. This will give users the ability to further customize their ceph deployment.

Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
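In ceph CLI terms, treating an AZ as a rack amounts to creating a rack bucket named after the zone and moving hosts under it. A sketch of the idea, not the charm's actual code; the function name and the placement under `root=default` are assumptions:

```python
def az_rack_commands(az, hostname):
    # Create a rack bucket named after the availability zone, attach
    # it under the default CRUSH root, and move this host into it.
    return [
        ["ceph", "osd", "crush", "add-bucket", az, "rack"],
        ["ceph", "osd", "crush", "move", az, "root=default"],
        ["ceph", "osd", "crush", "move", hostname, "rack=" + az],
    ]
```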
-
- Mar 17, 2016
Chris Holcombe authored
This patch adds an action to replace a hard drive for a particular osd server. The user executing the action gives the OSD number and also the device name of the replacement drive; the rest is taken care of by the action. The action will attempt to go through all the osd removal steps for the failed drive. It will force-unmount the drive, and if that fails it will lazy-unmount the drive. This force-then-lazy pattern comes from experience with dead hard drives not behaving nicely with umount.

Change-Id: I914cd484280ac3f9b9f1fad8b35ee53e92438a0a
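The removal steps for a failed OSD generally follow the standard ceph sequence; here is a sketch of the command order, including the force-then-lazy umount. The helper name and exact step list are assumptions (the real action also stops the daemon and prepares the replacement drive):

```python
def osd_removal_commands(osd_id, mount_point):
    osd_name = "osd.%d" % osd_id
    return [
        ["ceph", "osd", "out", str(osd_id)],          # stop data placement
        ["ceph", "osd", "crush", "remove", osd_name], # drop from CRUSH map
        ["ceph", "auth", "del", osd_name],            # remove its keyring
        ["ceph", "osd", "rm", str(osd_id)],           # delete the OSD entry
        ["umount", "-f", mount_point],                # force unmount first...
        ["umount", "-l", mount_point],                # ...lazy unmount if that fails
    ]
```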
-
- Mar 16, 2016
Ryan Beisner authored
The osd-devices charm config option is a whitelist, and the charm needs to gracefully handle items in that whitelist which may not exist.

Change-Id: Iea212ef0e0987767e0e666ee2e30a59d4bef189a
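Gracefully handling whitelist entries that don't exist can be as simple as filtering on presence. A minimal sketch; the function name is assumed, and the real charm does more validation than this:

```python
import os


def usable_osd_devices(osd_devices):
    # osd-devices is a space-separated whitelist; silently skip any
    # entry that is not actually present on the machine.
    return [dev for dev in osd_devices.split() if os.path.exists(dev)]
```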
-
Billy Olsen authored
Modify the Makefile to point at the appropriate tox targets so that tox and Make output can be equivalent. This involves mapping the lint target to the pep8 target and the test target to the py27 target.

Change-Id: I7216b8338ca3f548b6b373821d2bf9a4dca37286
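The mapping described amounts to a Makefile along these lines. A sketch only: the target names come from the commit message, but the exact recipes are assumptions:

```makefile
# Make targets delegate to tox so both entry points behave the same.
lint:
	tox -e pep8

test:
	tox -e py27
```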
-
- Mar 04, 2016
Jenkins authored
-
- Mar 03, 2016
Chris MacNaughton authored
Tests now verify that ceph osds are running, to ensure they pass in either order.

Change-Id: Ia543f4b085d4e97976ba08db508761f8dde97c42
-
- Mar 02, 2016
James Page authored
Change-Id: Ibbfcc6d2f0086ee9baf347ccfdf0344ed9c0fb82
-
- Feb 29, 2016
uoscibot authored
-
- Feb 25, 2016
James Page authored
-
- Feb 24, 2016
James Page authored
-
- Feb 23, 2016
James Page authored
-
- Feb 22, 2016
Edward Hope-Morley authored
-
Liam Young authored
-
- Feb 18, 2016
Edward Hope-Morley authored
-
Edward Hope-Morley authored
-
- Feb 17, 2016
James Page authored
-
- Feb 16, 2016
James Page authored
-
- Feb 12, 2016
Ryan Beisner authored
-
James Page authored
-
- Feb 10, 2016
Edward Hope-Morley authored
Support multiple L3 segments.

Closes-Bug: 1523871
-
- Feb 02, 2016
Bjorn Tillenius authored
-
Bjorn Tillenius authored
-
- Jan 30, 2016
James Page authored
-
- Jan 29, 2016
James Page authored
-
James Page authored
-