- Sep 20, 2017
Xav Paice authored
In small clusters, adding OSDs at their full weight causes a massive IO workload, which makes performance unacceptable. This adds a config option to change the initial weight; it can be set to 0 or something small for clusters that would be affected. Closes-Bug: 1716783 Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
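For context, a minimal sketch of the corresponding ceph.conf stanza, assuming the charm option simply drives the upstream `osd crush initial weight` setting (the charm option name itself is not stated in this commit):

```python
# Minimal sketch, assuming the new option is rendered as the upstream
# `osd crush initial weight` setting; 0 means a newly added OSD joins the
# CRUSH map with no weight and takes no data until the operator reweights it.
INITIAL_WEIGHT_SNIPPET = """\
[osd]
osd crush initial weight = 0
"""
```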
- Sep 14, 2017
James Page authored
Drop explicit global configuration of the keyring, supporting installation of the ceph/ceph-mon/ceph-osd charms on the same machine. Change-Id: Ib4afd01fbcc4478ce90de5bd464b7829ecc5da7e Closes-Bug: 1681750
- Aug 29, 2017
Dmitrii Shcherbakov authored
The 'experimental' option is no longer needed as of the Luminous release: https://github.com/ceph/ceph/blob/luminous/src/common/legacy_config_opts.h#L79 Change-Id: Idbbb69acec92b2f2efca80691ca73a2030bcf633
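For reference, a sketch of the guard that pre-Luminous releases required in ceph.conf before bluestore could be used at all; on Luminous and later the line is simply dropped (the exact directive below is recalled pre-Luminous behaviour, not part of this commit):

```python
# Pre-Luminous only: bluestore had to be whitelisted behind the experimental
# guard. From Luminous onward this line is unnecessary and can be removed.
PRE_LUMINOUS_BLUESTORE_GUARD = """\
[global]
enable experimental unrecoverable data corrupting features = bluestore
"""
```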
- Jul 07, 2017
James Page authored
Add highly experimental support for bluestore storage format for OSD devices; this is disabled by default and should only be enabled in deployments where loss of data does not present a problem! Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
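A hedged example of how an operator might opt in, assuming the charm exposes this as a boolean `bluestore` option (the option name is an assumption here):

```python
# Hypothetical opt-in at runtime; "bluestore" as the charm option name is an
# assumption, and this should only be run where loss of data is acceptable.
import subprocess

subprocess.check_call(["juju", "config", "ceph-osd", "bluestore=true"])
```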
- Mar 28, 2017
Billy Olsen authored
Some upgrade scenarios (hammer->jewel) require that the ownership of the ceph osd directories is changed from root:root to ceph:ceph. This patch improves the upgrade experience by upgrading one OSD at a time, as opposed to stopping all services, changing file ownership, and then restarting all services at once. It makes use of the `setuser match path` directive in ceph.conf, which causes the ceph daemon to start as the owner of the OSD's root directory. This allows the ceph OSDs to continue running should an unforeseen incident occur as part of this upgrade. Change-Id: I00fdbe0fd113c56209429341f0a10797e5baee5a Closes-Bug: #1662591
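A sketch of the directive in question, written here with Python's configparser as a template might ultimately render it; the path shown assumes the stock OSD data directory layout:

```python
# Sketch of the `setuser match path` directive: each ceph-osd daemon starts as
# whichever user owns its data directory, so already-chowned OSDs run as
# ceph:ceph while not-yet-converted ones keep running as root.
import configparser

conf = configparser.ConfigParser()
conf["osd"] = {
    # $cluster and $id are expanded by Ceph at daemon start, not by Python.
    "setuser match path": "/var/lib/ceph/osd/$cluster-$id",
}
with open("ceph.conf.example", "w") as f:
    conf.write(f)
# Renders as:
# [osd]
# setuser match path = /var/lib/ceph/osd/$cluster-$id
```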
- Jul 14, 2016
Chris Holcombe authored
This patch starts down the road to automated performance tuning. It attempts to identify optimal settings for hard drives and network cards and then persist them across reboots. It is conservative but configurable via config.yaml settings. Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
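As a purely illustrative sketch (not the charm's actual code), this is the general shape of such tuning: apply a kernel setting immediately and persist it so it survives a reboot. The device name, value, and udev rule path below are all assumptions:

```python
# Hypothetical example of "tune now, persist for reboot": raise a disk's
# read-ahead via sysfs and keep it via a udev rule. Illustrative only.
from pathlib import Path

def tune_read_ahead(device: str = "sdb", kb: int = 4096) -> None:
    # Apply immediately through sysfs.
    Path(f"/sys/block/{device}/queue/read_ahead_kb").write_text(str(kb))
    # Persist across reboots with a udev rule that reapplies the attribute.
    rule = (f'ACTION=="add|change", KERNEL=="{device}", '
            f'ATTR{{queue/read_ahead_kb}}="{kb}"\n')
    Path(f"/etc/udev/rules.d/60-{device}-readahead.rules").write_text(rule)
```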
- Jun 01, 2016
Edward Hope-Morley authored
Adds a new config-flags option to the charm that supports setting a dictionary of ceph configuration settings that will be applied to ceph.conf. This implementation supports config sections so that settings can be applied to any section supported by the ceph.conf template in the charm. Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735 Closes-Bug: 1522375
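A small sketch of the idea, assuming config-flags arrives as a dictionary keyed by ceph.conf section (the helper name and structure below are illustrative, not the charm's code):

```python
# Illustrative merge of a user-supplied {'section': {'key': 'value'}} mapping
# into a ceph.conf-style configuration.
import configparser

def apply_config_flags(conf: configparser.ConfigParser, flags: dict) -> None:
    for section, options in flags.items():
        if section != "DEFAULT" and not conf.has_section(section):
            conf.add_section(section)
        for key, value in options.items():
            conf.set(section, key, str(value))

conf = configparser.ConfigParser()
conf["global"] = {"auth cluster required": "cephx"}
apply_config_flags(conf, {"osd": {"osd max backfills": 1}})
```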
- May 17, 2016
Chris MacNaughton authored
In addition to ensuring that we have an AZ set, we need to ensure that the user has asked to have the crush map customized, so that use of the availability zone features is entirely opt-in. Change-Id: Ie13f50d4d084317199813d417a8de6dab25d340d Closes-Bug: 1582274
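A tiny sketch of the resulting opt-in check; the environment variable and option name are assumptions for illustration:

```python
# Only customize the crush map when an AZ is available AND the operator has
# explicitly opted in via charm config. Names here are illustrative.
import os

def should_customize_crushmap(customize_option_enabled: bool) -> bool:
    az = os.environ.get("JUJU_AVAILABILITY_ZONE")
    return bool(az) and customize_option_enabled
```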
James Page authored
As of the Ceph Jewel release, certain limitations apply to OSD object name lengths: specifically, if ext4 is in use for block devices or a directory-based OSD is configured, OSDs must be configured to limit object name length: osd max object name len = 256; osd max object namespace len = 64. This may cause problems storing objects with long names via the ceph-radosgw charm or for direct users of RADOS. Also ensure that ceph.conf has a final newline, as ceph requires this. Change-Id: I26f1d8a6f9560b307929f294d2d637c92986cf41 Closes-Bug: 1580320 Closes-Bug: 1578403
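Laid out as they would appear in ceph.conf, the two settings quoted above are:

```python
# The two directives from this change, under the [osd] section, for ext4- or
# directory-backed OSDs on Jewel.
JEWEL_OBJECT_NAME_LIMITS = """\
[osd]
osd max object name len = 256
osd max object namespace len = 64
"""
```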
- Apr 20, 2016
Chris Holcombe authored
This reverts commit c94e0b4b. Support for juju provided zones was broken on older Ceph releases where MAAS zones are not configured (i.e. nothing other than the default zone). Backing this change out until we can provide a more complete and backwards compatible solution. Closes-Bug: 1570960 Change-Id: I889d556d180d47b54af2991a65efcca09d685332
- Mar 31, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster as a lock and allows only one ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the source of the service, which prompts the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume it died during the upgrade and proceed with its own upgrade. I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1. Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case. This reverts commit db09fdce. Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
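A schematic sketch of the serialization described above (not the charm's real implementation): each host waits for its predecessor to record completion in a shared store backed by the monitor cluster, and stops waiting after 10 minutes so a dead peer cannot block the rollout. The kv_store interface here is an assumption:

```python
# Schematic rolling-upgrade serialization: wait on the previous host, time out
# after 10 minutes, upgrade, then mark this host as done for the next one.
import time
from typing import Optional

WAIT_TIMEOUT = 10 * 60  # seconds to wait on the previous host

def wait_for_previous_host(kv_store, previous_host: str) -> None:
    deadline = time.time() + WAIT_TIMEOUT
    while time.time() < deadline:
        if kv_store.get(f"osd-upgrade-done-{previous_host}"):
            return
        time.sleep(10)
    # Timed out: assume the previous host died mid-upgrade and carry on.

def upgrade_host(kv_store, this_host: str, previous_host: Optional[str]) -> None:
    if previous_host is not None:
        wait_for_previous_host(kv_store, previous_host)
    # ... upgrade ceph packages and restart the osd daemons on this host ...
    kv_store.set(f"osd-upgrade-done-{this_host}", True)
```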
- Mar 25, 2016
Chris Holcombe authored
This reverts commit 5b2cebfd. Change-Id: Ic6f371fcc2879886b705fdce4d59bc99e41eea89
- Mar 23, 2016
Chris Holcombe authored
This change adds functionality to allow the ceph osd cluster to upgrade in a serial, rolling fashion. It uses the ceph monitor cluster as a lock and allows only one ceph osd server at a time to upgrade. The upgrade is initiated by setting a config value for the source of the service, which prompts the osd cluster to upgrade to that new source and restart all osd processes server by server. If an osd server has been waiting on a previous server for more than 10 minutes and hasn't seen it finish, it will assume it died during the upgrade and proceed with its own upgrade. I had to modify the amulet test slightly to use the ceph-mon charm instead of the default ceph charm. I also changed the test so that it uses 3 ceph-osd servers instead of 1. Limitations of this patch: if the osd failure domain has been set to osd, then this patch will cause brief temporary outages while osd processes are being restarted. Future work will handle this case. Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
- Mar 18, 2016
Chris MacNaughton authored
The approach here is to use the availability zone as an imaginary rack. All hosts that are in the same AZ will be in the same imaginary rack. From Ceph's perspective this doesn't matter as it's just a bucket after all. This will give users the ability to further customize their ceph deployment. Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
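Conceptually this maps onto standard CRUSH bucket commands; a hedged sketch follows, where the environment-variable lookup and bucket layout are assumptions, while the `ceph osd crush` subcommands themselves are upstream commands:

```python
# Treat the availability zone as an imaginary rack: create a rack bucket named
# after the AZ, attach it to the default root, and move this host under it.
import os
import socket
import subprocess

az = os.environ.get("JUJU_AVAILABILITY_ZONE", "default-az")
host = socket.gethostname()

subprocess.check_call(["ceph", "osd", "crush", "add-bucket", az, "rack"])
subprocess.check_call(["ceph", "osd", "crush", "move", az, "root=default"])
subprocess.check_call(["ceph", "osd", "crush", "move", host, f"rack={az}"])
```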
- Jan 18, 2016
James Page authored
- Jan 13, 2016
Edward Hope-Morley authored
Add loglevel config option. Closes-Bug: 1520236
- Sep 29, 2014
Edward Hope-Morley authored
- Sep 27, 2014
Corey Bryant authored
(ConfigParser can't parse)
- Sep 24, 2014
Edward Hope-Morley authored
Edward Hope-Morley authored
Edward Hope-Morley authored
- Sep 18, 2014
Edward Hope-Morley authored
Adds IPv6 support for ceph osd.
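Roughly, IPv6 operation needs ceph to bind IPv6 and monitor addresses expressed in bracketed form; the addresses below are placeholders and the charm's exact rendering may differ:

```python
# Rough sketch of an IPv6-enabled ceph.conf fragment; addresses are
# placeholder documentation-range values, not from this change.
IPV6_SNIPPET = """\
[global]
ms bind ipv6 = true
mon host = [2001:db8::10]:6789,[2001:db8::11]:6789,[2001:db8::12]:6789
"""
```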
- Jul 23, 2014
James Page authored
- Jun 06, 2014
James Page authored
- Mar 25, 2014
Edward Hope-Morley authored
- Dec 12, 2013
Edward Hope-Morley authored
Fixes: bug 1259919
- Dec 17, 2012
James Page authored
- Oct 19, 2012
James Page authored
- Oct 09, 2012
James Page authored
- Oct 08, 2012
James Page authored