diff --git a/web/support/kb/general/cinderVolumeMigration.rst b/web/support/kb/general/cinderVolumeMigration.rst
index 9aebd312f32facda97e72f50ba46426f3e59a7d7..6ee038a9a31e1018209e663ce30e4578112395d0 100644
--- a/web/support/kb/general/cinderVolumeMigration.rst
+++ b/web/support/kb/general/cinderVolumeMigration.rst
@@ -13,11 +13,11 @@ storage backend, that the source and destination Ceph cluster names differ,
 and that the Ceph pool names differ, specifically:
 
 * at the source site, Ceph cluster name is ``ceph`` and Cinder/Glance Ceph pool
-  names are oscinder/osglance
+  names are ``oscinder/osglance``
 * at the destination site, Ceph cluster name is ``cephpa1`` and Cinder/Glance Ceph
-  pool names are cinder-ceph-ct1-cl1/glance-ct1-cl1
+  pool names are ``cinder-ceph-ct1-cl1/glance-ct1-cl1``
 
-We also assume that Ceph mirroring via rbd-mirror has already been configured.
+We also assume that Ceph mirroring via ``rbd-mirror`` has already been configured.
 
 Specifically, we assume:
 
@@ -168,17 +168,15 @@ OpenStack source
 
 The following commands will be issued by the OpenStack tenant admin.
 
-It will be convenient to::
+It will be convenient to set a bunch of environment variables::
 
    export volID=<hexString> # where <hexString> is the Cinder volume UUID
-
-We will also define a bunch of other variables::
-   
-   export srvOSname=serverX
+   export srvOSname=<serverName> # where <serverName> is the OpenStack name of the source server
    export imgOSname=${srvOSname}_vda_img ; echo $imgOSname # this will be the OpenStack name of the glance image
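+
+If you do not know the volume UUID beforehand, you can look it up from the server (a
+sketch: the ``volumes_attached`` field name is an assumption about your
+python-openstackclient version)::
+
+   openstack server show $srvOSname -c volumes_attached -f value # prints the attached volume UUID(s)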
 
-If the source server has additional disks defined in /etc/fstab you may want to comment the relevant
-entries before performing the snapshot, and uncomment them right after the snapshot has been taken::
+If the source server has additional disks defined in ``/etc/fstab``, you may want to comment
+out the relevant entries before performing the snapshot, and uncomment them right after the
+snapshot has been taken (a sketch of this edit follows the upload command below). Then execute::
 
    cinder upload-to-image --force True --disk-format qcow2 $volID ${imgOSname}
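+
+For the ``/etc/fstab`` edit mentioned above, a minimal sketch (run on the source server
+before the upload; it assumes the extra disk entries start with ``/dev/vdb``, so adapt the
+pattern to your setup)::
+
+   sed -i.bak 's|^/dev/vdb|#/dev/vdb|' /etc/fstab # comment out; restore from /etc/fstab.bak after the snapshot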
 
@@ -187,57 +185,93 @@ then we need to get the name of the newly-created image::
    export imgOSid=`glance image-list | grep $imgOSname | awk '{print $2}'` ; echo $imgOSid
    glance image-show $imgOSid # wait until the command returns status=active
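+
+Rather than polling by hand, you can wait in a loop (a sketch: it greps the status row of
+the ``glance image-show`` table)::
+
+   until glance image-show $imgOSid | grep -q 'status.*active' ; do sleep 10 ; done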
 
-Activate mirroring for the image by enabling the exclusive-lock and journaling features: note that within 
-Glance the image name corresponds to the Glance UUID alone, without any prefix.
+Activate mirroring for the image by enabling the ``exclusive-lock`` and ``journaling``
+features: note that within Glance the image name corresponds to the Glance UUID alone,
+without any prefix.
 
-On the client OpenStack source, prepare and execute the following command (**Command_one**): this command's
-output should be later executed by the Ceph source admin::
+On the client OpenStack source, prepare and execute the following commands (an example of
+the generated output is sketched right after this list):
 
-   echo "echo Command_One" ; echo "export imgOSid=$imgOSid" ; \
+- a command (**Command_ceph_src**) whose output should later be executed by the Ceph source
+  admin::
+
+   echo "echo Command_ceph_src" ; echo "export imgOSid=$imgOSid" ; \
    echo "rbd --cluster ceph -p osglance feature enable \$imgOSid exclusive-lock ; \
    rbd --cluster ceph -p osglance feature enable \$imgOSid journaling ; \
    rbd --cluster ceph -p osglance info \$imgOSid ; echo sleep ; sleep 10 ; \
    rbd --cluster cephpa1 -p osglance info \$imgOSid"
    
-Also prepare and execute this other command (**Command_two**): this command's output should be later executed
-by the OpenStack destination tenant admin::
+- a command (**Command_ceph_dst**) whose output should later be executed by the Ceph
+  destination admin::
+
+   echo "echo Command_ceph_dst: Ceph destination" ; echo "export imgOSid=$imgOSid" ; \
+   echo "rbd --cluster cephpa1 -p osglance ls -l | grep \$imgOSid ; echo === ; sleep 5" ; \
+   echo "rbd --cluster cephpa1 -p osglance cp \$imgOSid glance-ct1-cl1/\$imgOSid ; \
+   rbd --cluster cephpa1 -p glance-ct1-cl1 snap create \${imgOSid}@snap ; \
+   rbd --cluster cephpa1 -p glance-ct1-cl1 snap protect \${imgOSid}@snap ; sleep 2 ; \
+   rbd --cluster cephpa1 -p glance-ct1-cl1 ls -l | grep \$imgOSid"
 
-   echo "echo Command Two" ; echo "export imgOSid=$imgOSid ; \
+- a command (**Command_openstack**) whose output should later be executed by the OpenStack
+  destination tenant admin::
+
+   echo "echo Command_openstack" ; echo "export imgOSid=$imgOSid ; \
    export imgOSname=$imgOSname ; \
+   echo glance --os-image-api-version 1 image-create --name \$imgOSname --store rbd --disk-format qcow2 --container-format bare --location rbd://cephpa1/glance-ct1-cl1/\${imgOSid}/snap"
 
+- a command (**Command_clean**) whose output should later be executed by the Ceph source
+  admin, after the image has successfully made it to the new OpenStack cluster::
+
+   echo "echo Command_clean: Ceph source" ; echo "export imgOSid=$imgOSid" ; \
+   echo "rbd --cluster ceph -p osglance ls -l | grep \$imgOSid ; echo === ; sleep 5" ; \
+   echo "rbd --cluster ceph -p osglance feature disable \$imgOSid journaling ; \
+   rbd --cluster ceph -p osglance feature disable \$imgOSid exclusive-lock ; \
+   rbd --cluster ceph -p osglance info \$imgOSid ; echo sleep ; sleep 10 ; \
+   rbd --cluster ceph -p osglance info \$imgOSid"
 
 Ceph source
 ^^^^^^^^^^^
 
-Execute the output of **Command_one** above, prepared by the OpenStack source tenant admin.
+Execute the output of **Command_ceph_src** above, prepared by the OpenStack source tenant admin.
+
 
 Ceph destination
 ^^^^^^^^^^^^^^^^
 
-Check the image has been properly mirrored::
+Execute the output of **Command_ceph_dst** above, prepared by the OpenStack source tenant admin.
+
+Note that you should proceed only after the output of::
 
    rbd --cluster cephpa1 -p osglance ls -l | grep $imgOSid
 
-Copy the image to the pool managed by Glance and prepare the snapshot Glance expects to find::
+shows that the mirroring has completed.
+
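+You can also query rbd-mirror explicitly (a sketch: on the destination the image should
+report ``up+replaying`` with ``entries_behind_master=0`` in its description)::
+
+   rbd --cluster cephpa1 mirror image status osglance/$imgOSid
+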
+The other commands take care of copying the image to the pool managed by Glance and of
+preparing the snapshot Glance expects to find.
+
+Note that we copy the image to the Glance pool, rather than asking Glance to point to the
+non-standard pool, because otherwise the procedure would get a bit more complicated at
+cleanup time (namely, the destination image would disappear as soon as the mirroring is
+switched off, and we would have to tolerate an error message when deleting the image from
+the destination pool).
 
-   rbd --cluster cephpa1 -p osglance cp $imgOSid glance-pa1-cl1/${imgOSid} ; \
-   rbd --cluster cephpa1 -p glance-pa1-cl1 snap create ${imgOSid}@snap ; \
-   rbd --cluster cephpa1 -p glance-pa1-cl1 snap protect ${imgOSid}@snap
-   rbd --cluster cephpa1 -p glance-pa1-cl1 ls -l | grep $imgOSid
 
-Note that we copy the image to the Glance pool rather than ask Glance to point the non-standard pool, otherwise
-the procedure would get a bit more complicated once we perform the cleanup (namely, the destination image would
-disappear as soon as the mirroring is switched off, and we would tolerate an error message when deleting the image
-from the destination pool).
+Ceph source: cleanup
+^^^^^^^^^^^^^^^^^^^^
 
-At this point, you can clean things up by deleting the volume in the source Ceph pool.
+At this point, you can clean things up by:
+
+- executing the output of **Command_clean** above, prepared by the OpenStack source tenant
+  admin
+
+- deleting the volume in the source Ceph pool (to be conservative, this step is not
+  included in the above command set; a minimal sketch follows)
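+
+For the second step, the conservative route is to delete through Cinder from the source
+OpenStack cluster, once everything has been verified at the destination (a sketch)::
+
+   cinder delete $volID # with the Ceph backend this removes volume-$volID from the oscinder pool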
 
 OpenStack destination
 ^^^^^^^^^^^^^^^^^^^^^
    
-Using tenant admin credentials, execute the command **Command_two** prepared on the source OpenStack cluster.
+Using tenant admin credentials, execute the output of **Command_openstack** prepared on the
+source OpenStack cluster.
 
 At this point, in the destination OpenStack cluster you can launch a new instance from the image.
 
-.. warning:: Make sure the disk for such instance is at least as big as the one of the original instance.
+.. warning:: Make sure the disk of the new instance is at least as large as that of the original instance.
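+
+For example, a minimal launch sketch (flavor and network names here are placeholders; pick a
+flavor whose disk satisfies the warning above)::
+
+   openstack server create --image $imgOSname --flavor <flavorName> \
+     --network <netName> ${srvOSname}-new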