diff --git a/web/support/kb/general/cinderVolumeMigration.rst b/web/support/kb/general/cinderVolumeMigration.rst
index ca00dc8e06cdbdb9c79cece98892c76a4bc5dd7f..112ca38efdd6e2f311c394c2f8047fadd4cf8bee 100644
--- a/web/support/kb/general/cinderVolumeMigration.rst
+++ b/web/support/kb/general/cinderVolumeMigration.rst
@@ -104,22 +104,8 @@ by executing::
   rbd --cluster cephpa1 cp oscinder/$volName cinder-ceph-pa1-cl1/$volName
   rbd --cluster cephpa1 -p cinder-ceph-pa1-cl1 ls -l | grep $volName
 
-
-.. note:: Although we keep the original volume name, after the import the Ceph volume
-	  will be renamed to ``volume-<newHexString>``.
-
-
-At this point, if you feel brave enough, you can clean things up by
-
-* deleting the volume in the source Ceph pool, or
-* turning the "mirroring" off, which will cause the mirrored volume to disappear, and removing
-  the mirroring features from the source volume
-
-In this example we will opt for the second possibility::
-
-   rbd --cluster ceph -p oscinder mirror image disable $volName
-   rbd --cluster ceph -p oscinder feature disable $volName exclusive-lock
-   rbd --cluster ceph -p oscinder feature disable $volName journaling
+.. note:: Although we keep the original volume name, after the OpenStack import described
+          in the next section the Ceph volume will be renamed to ``volume-<newHexString>``.
 
 
 OpenStack part
@@ -143,6 +129,22 @@ to ``volume-<newHexString>``.
 From the OpenStack GUI, you can now attach the volume to a virtual machine.
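+
+If you want to double-check the rename on the Ceph side (a quick sketch, reusing the
+destination cluster and pool names from above), list the destination pool and look for
+the new ``volume-`` prefix::
+
+   rbd --cluster cephpa1 -p cinder-ceph-pa1-cl1 ls -l | grep volume-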
 
 
+Ceph part, final
+^^^^^^^^^^^^^^^^
+
+At this point, if you feel brave enough, you can clean things up by
+
+* deleting the volume in the source Ceph pool (see the sketch further below), or
+* turning mirroring off, which will cause the mirrored volume to disappear, and then removing
+  the mirroring features from the source volume
+
+In this example we will opt for the second possibility::
+
+   rbd --cluster ceph -p oscinder mirror image disable $volName
+   rbd --cluster ceph -p oscinder feature disable $volName exclusive-lock
+   rbd --cluster ceph -p oscinder feature disable $volName journaling
+
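+Should you instead prefer the first possibility, a minimal sketch (assuming the volume is no
+longer needed on the source cluster; depending on your Ceph release you may have to disable
+mirroring on the image before removing it) would be::
+
+   rbd --cluster ceph -p oscinder rm $volName
+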
+
 Volume migration via Glance
 ---------------------------
 
@@ -188,8 +190,8 @@ then we need to get the name of the newly-created image::
 Activate mirroring for the image by enabling the exclusive-lock and journaling features: note that within 
 Glance the image name corresponds to the Glance UUID alone, without any prefix.
 
-On the client OpenStack source, prepare the following command (**command_one**) whose output is to
-be later executed by the Ceph source admin::
+On the source OpenStack client, prepare and execute the following command (**Command_one**); its
+output will later be executed by the Ceph source admin::
 
    echo "echo Command_One" ; echo "export imgOSid=$imgOSid" ; \
    echo "rbd --cluster ceph -p osglance feature enable \$imgOSid exclusive-lock ; \
@@ -197,8 +199,8 @@ be later executed by the Ceph source admin::
    rbd --cluster ceph -p osglance info \$imgOSid ; echo sleep ; sleep 10 ; \
    rbd --cluster cephpa1 -p osglance info \$imgOSid"
    
-Also prepare this other command (**command_two**) whose output will be executed by the OpenStack destination
-tenant admin::
+Also prepare and execute the following command (**Command_two**); its output will later be executed
+by the OpenStack destination tenant admin::
 
    echo "echo Command Two" ; echo "export imgOSid=$imgOSid ; \
    export imgOSname=$imgOSname ; \
@@ -208,7 +210,7 @@ tenant admin::
 Ceph source
 ^^^^^^^^^^^
 
-Execute the output of **command_one** above, prepared by the OpenStack source tenant admin.
+Execute the output of **Command_one** above, prepared by the OpenStack source tenant admin.
 
 Ceph destination
 ^^^^^^^^^^^^^^^^
@@ -234,7 +236,7 @@ At this point, you can clean things up by deleting the volume in the source Ceph
 OpenStack destination
 ^^^^^^^^^^^^^^^^^^^^^
    
-Using tenant admin credentials, execute the command **command_two** prepared on the source OpenStack cluster.
+Using tenant admin credentials, execute the output of **Command_two** prepared on the source OpenStack cluster.
 
 At this point, in the destination OpenStack cluster you can launch a new instance from the image.
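+
+For example, a minimal sketch with the OpenStack CLI (assuming the image kept the name held in
+``$imgOSname``; flavor, network and instance names are placeholders to adapt to your environment)::
+
+   openstack server create --image $imgOSname --flavor <flavorName> \
+       --network <networkName> <instanceName>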