- Sep 20, 2017
Xav Paice authored
In small clusters, adding OSDs at their full weight causes a massive IO workload, which makes performance unacceptable. This adds a config option to change the initial weight; we can set it to 0 or something small for clusters that would be affected.
Closes-Bug: 1716783
Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
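
A sketch of how this might be used; `crush-initial-weight` is assumed to be the new option's name (check the charm's config.yaml for the exact key), and osd.12 is a placeholder:

```shell
# Have newly added OSDs join with zero CRUSH weight so they take no
# data (and generate no backfill IO) until deliberately weighted in.
juju config ceph-osd crush-initial-weight=0

# Later, ramp a new OSD up gradually with the standard ceph tooling:
ceph osd crush reweight osd.12 0.2
```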

- Aug 29, 2017
Dmitrii Shcherbakov authored
The 'experimental' option is no longer needed as of the Luminous release: https://github.com/ceph/ceph/blob/luminous/src/common/legacy_config_opts.h#L79
Change-Id: Idbbb69acec92b2f2efca80691ca73a2030bcf633

- Aug 21, 2017
Chris MacNaughton authored
Closes-Bug: #1709962
Closes-Bug: #1710645
Change-Id: I1b6d91f0f09f0142f4470d8ae3eea650165a0575

- Aug 14, 2017
Edward Hope-Morley authored
Also had to fix some imports due to changes implemented as part of the cleanup.
Change-Id: Ie232828056a7f15525f820e8e106264b22697168

- Jul 07, 2017
James Page authored
Add highly experimental support for the bluestore storage format for OSD devices; this is disabled by default and should only be enabled in deployments where loss of data does not present a problem!
Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
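
Assuming the option is a boolean named `bluestore` (an assumption; see config.yaml), opting in might look like:

```shell
# Highly experimental: only for deployments where data loss is acceptable.
juju config ceph-osd bluestore=true
```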

- Mar 28, 2017
Billy Olsen authored
Some upgrade scenarios (hammer->jewel) require that the ownership of the ceph osd directories is changed from root:root to ceph:ceph. This patch improves the upgrade experience by upgrading one OSD at a time, as opposed to stopping all services, changing file ownership, and then restarting all services at once. This patch makes use of the `setuser match path` directive in ceph.conf, which causes the ceph daemon to start as the owner of the OSD's root directory. This allows the ceph OSDs to continue running should an unforeseen incident occur as part of this upgrade.
Change-Id: I00fdbe0fd113c56209429341f0a10797e5baee5a
Closes-Bug: #1662591
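
For reference, the directive looks roughly like this in ceph.conf (path pattern taken from upstream Ceph usage; treat it as illustrative):

```ini
[osd]
# Start each daemon as the user that owns its data directory, so OSDs
# whose directories are still root-owned keep running as root until
# this unit's turn to stop, chown, and restart them comes around.
setuser match path = /var/lib/ceph/osd/$cluster-$id
```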

- Feb 17, 2017
Chris MacNaughton authored
Only check for upgrade requests if the local unit is installed and bootstrapped, avoiding attempts to upgrade on initial execution of config-changed for trusty UCA pockets. Note that the upgrade process relies on a running ceph cluster.
Change-Id: Ic7e427368a373ed853111d837a0223a75b46ce8e
Closes-Bug: 1662943

- Jan 07, 2017
James Page authored
Make use of new charms.ceph utils to generalize the upgrade paths for OSD upgrades, ensuring that only supported upgrade paths are undertaken for Ubuntu 16.04 UCA pockets.
Partial-Bug: 1611082
Change-Id: Ifbf3a7ffbb5ab17e839099658c7a474784ab4083

- Dec 22, 2016
Billy Olsen authored
This change skips over any device which does not start with a leading folder separator ('/'). Allowing such entries causes an OSD to be created out of the charm directory. This can be caused by something as innocuous as 2 spaces between devices. The result is that the root device is also running an OSD, which is undesirable.
Change-Id: I0b5530dc4ec4306a9efedb090e583fb4e2089749
Closes-Bug: 1652175
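
A minimal sketch of the guard this describes (illustrative, not the charm's actual code):

```python
def filter_osd_devices(config_value):
    """Return only absolute device paths from the osd-devices setting.

    Splitting on a single space turns a run of two spaces into an empty
    entry, which downstream code would resolve inside the charm
    directory; requiring a leading '/' drops such entries.
    """
    return [dev for dev in config_value.split(' ') if dev.startswith('/')]

# filter_osd_devices('/dev/sdb  /dev/sdc') -> ['/dev/sdb', '/dev/sdc']
```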

- Sep 28, 2016
Chris Holcombe authored
Install an apparmor profile for ceph-osd processes, and provide an associated configuration option to place any ceph-osd processes into enforce, complain, or disable apparmor profile mode. As this is the first release of this feature, default to disabled and allow charm users to test and provide feedback for this release.
Change-Id: I4524c587ac70de13aa3a0cb912033e6eb44b0403
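
Assuming the option is named `aa-profile-mode` (check config.yaml), switching modes might look like:

```shell
# Gather AppArmor feedback without blocking anything; 'enforce' and
# 'disable' are the other supported modes described above.
juju config ceph-osd aa-profile-mode=complain
```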

- Sep 21, 2016
Chris Holcombe authored
Moving the ceph mon upgrade code over to the ceph shared library. This will make it easier to make patches and have them applied to all 3 charms at once.
Change-Id: I541269d05e6ff8883233a21c78ebe9df89b9e797
James Page authored
Juju 2.0 provides support for display of the version of an application deployed by a charm in juju status. Insert the application_version_set function into the existing assess_status function - this gets called after all hook executions, and periodically after that, so any changes in package versions due to normal system updates will also be reflected in the status output. This review also includes a resync of charm-helpers to pick up hookenv support for this feature.
Change-Id: If1ec3dcc5025d1a1f7e64f21481412ad630050ea
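
A sketch of the pattern, using the charmhelpers API named in the change (the dpkg-query lookup is one plausible way to obtain the version, not necessarily the charm's):

```python
import subprocess

from charmhelpers.core.hookenv import application_version_set


def assess_status():
    # ... existing workload status checks ...

    # Report the installed ceph package version so `juju status` shows
    # it; re-running this after every hook picks up package updates.
    version = subprocess.check_output(
        ['dpkg-query', '-W', '-f=${Version}', 'ceph'],
        universal_newlines=True)
    application_version_set(version)
```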

- Sep 01, 2016
Chris Holcombe authored
The rolling upgrade code sets keys in the ceph osd cluster to discover whether it can upgrade itself. This patch addresses an issue where the upgrade code was not taking into account multiple upgrades to newer ceph versions in a row.
Closes-Bug: 1611719
Change-Id: I467d95f3619b9ad2a9f4f46abee4e02b5d9703da

- Aug 11, 2016
Chris MacNaughton authored
This includes a resync of charms_ceph to raise the directory one level. The charms_ceph change that we're syncing in renames the ceph.py file to __init__.py to remove the second level of namespacing.
Change-Id: I4eabbd313de2e9420667dc4acca177b2dbbf9581

- Aug 02, 2016
Chris MacNaughton authored
This change moves our ceph.py into a separate repository that we can share between various ceph-related Juju projects, along with a Makefile change to use a new git_sync file to partially sync a git repository into a specified path.
Change-Id: Iaf3ea38b6e5268c517d53b36105b70f23de891bb

- Jul 22, 2016
Chris Holcombe authored
This change ensures that when ceph is upgraded from an older version that runs as root to a newer version that runs as the ceph user, all directories are chowned.
Closes-Bug: 1600338
Change-Id: Ifac8cde6e6ea6f3a366fb40b9ffd261036720310

- Jul 19, 2016
Chris Holcombe authored
The pause and resume actions shell out to the ceph command to run OSD operations (in/out). Because the default cephx key given out by the monitor cluster does not contain the correct permissions, these commands fail. Use the osd-upgrade user, which has the correct permissions.
Closes-Bug: 1602826
Depends-On: I6af43b61149c6eeeeb5c77950701194beda2da71
Change-Id: Ie31bc9048972dbb0986ac8deb5b821a4db5d585f
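
In effect the actions now run the OSD operations as the privileged user, along these lines (osd id illustrative):

```shell
# The default client key lacks the caps for osd in/out, so use the
# client.osd-upgrade identity, which carries the needed permissions.
ceph --id osd-upgrade osd out 3
ceph --id osd-upgrade osd in 3
```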
Chris Holcombe authored
Use the osd-upgrade key when replacing OSDs, as this key has the correct cephx permissions to perform the operation.
Closes-Bug: 1602826
Depends-On: I6af43b61149c6eeeeb5c77950701194beda2da71
Change-Id: I32d2f1a4036e09d5d1fd13009c95ab1514e7304c

- Jul 14, 2016
Chris Holcombe authored
This patch starts down the road to automated performance tuning. It attempts to identify optimal settings for hard drives and network cards and then persist them across reboots. It is conservative but configurable via config.yaml settings.
Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b

- Jun 28, 2016
James Page authored
All contributions to this charm were made under Canonical copyright; switch to the Apache-2.0 license as agreed so we can move forward with official project status. In order to make this change, this commit also drops the inclusion of upstart configurations for very early versions of Ceph (argonaut), as they are no longer required.
Change-Id: I9609dd79855b545a2c5adc12b7ac573c6f246d48

- Jun 01, 2016
Edward Hope-Morley authored
Adds a new config-flags option to the charm that supports setting a dictionary of ceph configuration settings that will be applied to ceph.conf. This implementation supports config sections so that settings can be applied to any section supported by the ceph.conf template in the charm.
Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
Closes-Bug: 1522375
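
Hypothetical usage; the section and key shown are illustrative, not prescribed by the change:

```shell
# Keys under 'osd' land in the [osd] section of the rendered ceph.conf.
juju config ceph-osd config-flags='{"osd": {"osd max backfills": 1}}'
```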

- Apr 08, 2016
James Page authored
Juju 2.0 provides support for network spaces, allowing charm authors to support direct binding of relations and extra-bindings onto underlying network spaces. Add public and cluster extra-bindings to this charm to support separation of client-facing and cluster network traffic using Juju network spaces. Existing network configuration options will still be preferred over any Juju-provided network bindings, ensuring that upgrades to existing deployments don't break.
Change-Id: I78ab6993ad5bd324ea52e279c6ca2630f965544c
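
With a Juju 2.0 controller this might be used as follows (space names are placeholders):

```shell
# Separate client-facing traffic from replication/cluster traffic.
juju deploy ceph-osd --bind "public=ceph-access cluster=ceph-replication"
```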
Alex Kavanagh authored
This changeset provides pause and resume actions to the ceph charm. The pause action issues a 'ceph osd out <local_id>' for each of the ceph osd ids that are on the unit. The action does not stop the ceph osd processes. Note that if the pause-health action is NOT used on the ceph-mon charm then the cluster will start trying to rebalance the PGs across the remaining OSDs. If the cluster might reach its 'full ratio' then this would be a breaking action. The charm does NOT check for this eventuality. The resume action issues a 'ceph osd in <local_id>' for each of the local ceph osd processes on the unit. The charm 'remembers' that a pause action was issued, and if successful, it shows a 'maintenance' workload status as a reminder.
Change-Id: I9f53c9c6c4bb737670ffcd542acec0b320cc7f6a
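
Typical per-unit usage might look like:

```shell
# Mark every OSD on the unit 'out'; the processes keep running and the
# unit shows a 'maintenance' workload status as a reminder.
juju run-action ceph-osd/0 pause

# Mark them back 'in' once maintenance is complete.
juju run-action ceph-osd/0 resume
```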

- Mar 31, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster to lock, allowing only one ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the service's source, which prompts the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume that server died during the upgrade and proceed with its own upgrade. I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1. Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case. This reverts commit db09fdce.
Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
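
The trigger is the charm's existing `source` option; for example (pocket name illustrative):

```shell
# Each unit takes the monitor-held lock in turn, upgrades its packages
# to the new source, and restarts its osd processes before releasing.
juju config ceph-osd source=cloud:trusty-mitaka
```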

- Mar 25, 2016
Chris Holcombe authored
This reverts commit 5b2cebfd.
Change-Id: Ic6f371fcc2879886b705fdce4d59bc99e41eea89

- Mar 24, 2016
Edward Hope-Morley authored
Add charmhelpers.contrib.hardening and calls to it in the install, config-changed, upgrade-charm and update-status hooks. Also add a new config option to allow one or more hardening modules to be applied at runtime.
Change-Id: Ic417d678d3b0f7bfda5b393628a67297d7e79107
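
Assuming the new option is named `harden` and takes a space-separated list of modules (an assumption; see config.yaml), enabling it might look like:

```shell
# Apply the 'os' hardening module from charmhelpers.contrib.hardening
# on the next hook execution.
juju config ceph-osd harden=os
```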

- Mar 23, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster to lock, allowing only one ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the service's source, which prompts the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume that server died during the upgrade and proceed with its own upgrade. I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1. Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case.
Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e

- Mar 17, 2016
Chris Holcombe authored
This patch adds an action to replace a hard drive for a particular osd server. The user executing the action gives the OSD number and the device name of the replacement drive; the rest is taken care of by the action. The action attempts to go through all the osd removal steps for the failed drive. It will force unmount the drive, and if that fails it will lazily unmount the drive. This force-then-lazy pattern comes from experience with dead hard drives not behaving nicely with umount.
Change-Id: I914cd484280ac3f9b9f1fad8b35ee53e92438a0a
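
A hypothetical invocation; the action and parameter names here are assumptions, so check actions.yaml for the real interface:

```shell
# Remove the failed osd.4 (force unmount, then lazy unmount if needed)
# and rebuild it on the replacement device.
juju run-action ceph-osd/0 replace-osd osd-number=4 replacement-device=/dev/sdd
```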

- Feb 02, 2016
Bjorn Tillenius authored
Bjorn Tillenius authored

- Oct 08, 2015
James Page authored

- Oct 06, 2015
James Page authored
James Page authored