diff --git a/web/support/kb/objstore/rclone_quick_tutorial.rst b/web/support/kb/objstore/rclone_quick_tutorial.rst
index 36104f59c4bd424af91614be885a0741822861f4..d7674ec8c5880d370c12ff1367fcd583dd16a42a 100644
--- a/web/support/kb/objstore/rclone_quick_tutorial.rst
+++ b/web/support/kb/objstore/rclone_quick_tutorial.rst
@@ -10,7 +10,7 @@ Installing and configuring
 If not yet installed on your machine, you can install **rclone** with the following command::
 
     $ curl https://rclone.org/install.sh | sudo bash
-  
+
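+You can verify that the installation succeeded, for example, by printing the version::
+
+    $ rclone version
+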
 **Windows:**
 
-In order to use Rclone on Windows systems, you need a bash. If you don't have one yet, you can download **git bash** from:
+In order to use Rclone on Windows systems, you need a bash shell. If you don't have one yet, you can download **git bash** from:
@@ -72,7 +72,7 @@ Mind that ``env_auth = true`` takes variables from environment, so you shouldn't
 
 Case 3: Use EC2 credentials
 """"""""""""""""""""""""""
-First, you need to install the Openstack cli as described here in the `cli tutorial <https://cloud.garr.it/compute/install-cli/>`.
+First, you need to install the Openstack cli as described here in the `cli tutorial <https://cloud.garr.it/compute/install-cli/>`_.
 
 Then execute the content of the file::
 
@@ -102,7 +102,7 @@ Add the following text to *rclone.conf*::
 
 .. note::
 
-    You can use EC2 credentials to access object storage with other tools. Check `S3 interface to object storage <https://cloud.garr.it/support/kb/objstore/s3_quick_tutorial/>`.
+    You can use EC2 credentials to access object storage with other tools. Check `S3 interface to object storage <https://cloud.garr.it/support/kb/objstore/s3_quick_tutorial/>`_.
 
 Check configuration
 ^^^^^^^^^^^^^^^^^^^
diff --git a/web/support/kb/objstore/s3_quick_tutorial.rst b/web/support/kb/objstore/s3_quick_tutorial.rst
index 9a942d3f76c3b4fe5a351031248196e92af28b5e..235d6d6526d1c4a3f3b8a741624b17793d3fc7c1 100644
--- a/web/support/kb/objstore/s3_quick_tutorial.rst
+++ b/web/support/kb/objstore/s3_quick_tutorial.rst
@@ -15,7 +15,7 @@ Create EC2 credentials
 
 Create and download an `application credential <https://cloud.garr.it/compute/app-credential/>`_ from openstack dashboard as *app-credentials.sh*.
 
-You need to install the Openstack cli as described here in the `cli tutorial <https://cloud.garr.it/compute/install-cli/>`.
+You need to install the Openstack cli as described here in the `cli tutorial <https://cloud.garr.it/compute/install-cli/>`_.
 
 Then execute the content of the file::
 
@@ -24,7 +24,7 @@ Then execute the content of the file::
 And create the ec2 credentials::
 
     $ openstack ec2 credentials create -c access -c secret  -f value | paste -sd: | tee ~/.passwd-s3fs ~/.s3cfg
-    
+
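+You can check that the credentials have been created with::
+
+    $ openstack ec2 credentials list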
 
 S3cmd: manipulate object storage from command line
 --------------------------------------------------
@@ -43,12 +43,12 @@ Configure environment
 *********************
 
 Modify ``~/.s3cfg``::
-	
+
-	$ nano `~/.s3cfg
+	$ nano ~/.s3cfg
-	
+
-Comment the content adding ``#`` before the string::
+Comment out the existing line by adding ``#`` in front of it::
-	
-	#<ACCESS-KEY>:<SECRET-KEY> 
+
+	#<ACCESS-KEY>:<SECRET-KEY>
 
 and add the following content::
 
@@ -65,7 +65,7 @@ the list of buckets in your object storage area::
   $ s3cmd ls
   2022-04-21 10:31  s3://fulvio
   .....
-  
+
 
 Use s3cmd
 *********
@@ -74,43 +74,43 @@ Short summary of most common commands, for more information please visit
 the `s3cmd`_ page.
 
 List contents::
-  
+
   $ s3cmd ls
   2022-04-21 10:31  s3://fulvio
 
 Create new bucket::
-  
+
   $ s3cmd mb s3://mynewbucket/
   Bucket 's3://mynewbucket/' created
 
 Put file::
-  
+
   $ s3cmd put testsmallfile s3://mynewbucket/
   upload: 'testsmallfile' -> 's3://mynewbucket/testsmallfile'  [1 of 1]
   10485760 of 10485760   100% in    0s    30.37 MB/s  done
 
-Recursive copy, put whole directory (note missing trailing '/')::
+Recursive copy: put a whole directory (note the missing trailing '/' on the source)::
-  
+
   $ s3cmd put -r testdir s3://mynewbucket/
   upload: 'testdir/aRandomFile.png' -> 's3://mynewbucket/testdir/aRandomFile.png'  [1 of 1]
   67819 of 67819   100% in    0s  1577.91 kB/s  done
 
-Get a file (destination file name can be omitted, default to same name as remote)::
+Get a file (the destination file name can be omitted; it defaults to the same name as the remote)::
-  
+
   $ s3cmd get s3://mynewbucket/testdir/aRandomFile.png copyOfRandomFile.png
   download: 's3://mynewbucket/testdir/aRandomFile.png' -> 'copyOfRandomFile.png'  [1 of 1]
   67819 of 67819   100% in    0s     3.05 MB/s  done
 
 Delete a file (enable recursion with '-r')::
-  
-   $ s3cmd del s3://mynewbucket/testdir/aRandomFile.png 
+
+   $ s3cmd del s3://mynewbucket/testdir/aRandomFile.png
    delete: 's3://mynewbucket/testdir/aRandomFile.png'
 
 Delete bucket (must be empty)::
-  
+
    $ s3cmd rb s3://mynewbucket
    Bucket 's3://mynewbucket/' removed
-  
+
 
 S3fs: mount a container as a filesystem
 ---------------------------------------
@@ -128,10 +128,10 @@ Check the version::
 
 N.B. These instructions refer to version 1.86 available on Ubuntu 20.04. Different versions may require different configuration.
 
-Uncomment *user_allow_other* option by removing the *#*:: 
+Uncomment the *user_allow_other* option by removing the *#*::
 
-    $ nano /etc/fuse.conf
+    $ sudo nano /etc/fuse.conf
-    
+
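+After the edit, the file should contain the uncommented line::
+
+    user_allow_other
+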
-Assign the right permissions to configuration file::
+Assign the right permissions to the configuration file::
 
     $ chmod 600 ~/.passwd-s3fs
@@ -139,9 +139,9 @@ Assign the right permissions to configuration file::
 Mount a container
 ********************
-First, you need to create the container on your openstack project. Then you can mount your container on your local directory.
+First, you need to create the container in your OpenStack project (see the CLI example below). Then you can mount it on a local directory.
-Assume that you have a container named *test_container* and a local directory named *test_dir*:: 
+Assume that you have a container named *test_container* and a local directory named *test_dir*::
 
-    $ s3fs test_container test_dir -o  allow_other -o host=https://swift.cloud.garr.it -o use_path_request_style -o umask=000 
+    $ s3fs test_container test_dir -o  allow_other -o host=https://swift.cloud.garr.it -o use_path_request_style -o umask=000
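+
+If the container does not exist yet, you can create it beforehand, for example with the OpenStack CLI::
+
+    $ openstack container create test_container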
 
-Now your container has been mounted on *test_dir* directory. You can access it and every change you make inside the directory is istantly made inside the container.
+Once mounted, you can access the container through the *test_dir* directory, and every change you make inside it is instantly reflected in the container.
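+
+When you no longer need the mount, you can unmount the directory with::
+
+    $ fusermount -u test_dir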