diff correctGCBias.xml @ 13:cbf06812f848 draft

planemo upload for repository https://github.com/fidelram/deepTools/tree/master/galaxy/wrapper/ commit 13910e1a5ebcfc740c1bc5e38fc676592ef44f11
author bgruening
date Mon, 15 Feb 2016 10:07:44 -0500
parents b77b2ea431fe
children eb5c587f5fc7
--- a/correctGCBias.xml	Mon Jan 25 19:50:55 2016 -0500
+++ b/correctGCBias.xml	Mon Feb 15 10:07:44 2016 -0500
@@ -53,18 +53,22 @@
     </tests>
     <help>
 <![CDATA[
-**What it does**
 
-This tool requires the output from computeGCBias to correct a given BAM file according to the method proposed in
-Benjamini and Speed (2012) Nucleic Acids Res.
+What it does
+-------------
+
+This tool requires the output from computeGCBias to correct a given BAM file according to the method proposed in Benjamini and Speed (2012) Nucleic Acids Res. It removes reads from regions whose coverage is higher than expected (typically GC-rich regions) and adds reads to regions where too few reads are seen (typically AT-rich regions).
 The resulting BAM file can be used in any downstream analyses, but be aware that you should not filter out duplicates from here on.
 
-You can find more details on the correctGCBias doc page: https://deeptools.readthedocs.org/en/master/content/tools/correctGCBias.html
+See the description of ``computeGCBias`` to read up on the details of the GC bias assessment and correction method.
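+
+For reference, the equivalent deepTools command line looks roughly like the following minimal sketch (the BAM file, 2bit genome, frequency file from ``computeGCBias``, and effective genome size are placeholder values you would substitute with your own)::
+
+    correctGCBias -b input.bam \
+        --effectiveGenomeSize 2695000000 \
+        --genome genome.2bit \
+        --GCbiasFrequenciesFile freq_from_computeGCBias.txt \
+        -o input_gc_corrected.bam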
 
 
-**Output files**:
+Output files
+----------------
 
-- GC-normalized BAM file
+``correctGCBias`` has only one output: a BAM file in which read densities have been adjusted to reflect the read distribution expected from the genome's GC content.
+
+**Warning!** The GC-corrected BAM file will most likely contain duplicated reads in regions where the coverage had to be increased in order to match the expected read density. This means that you should absolutely avoid filtering out duplicate reads in your downstream analyses!
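+
+For example, if you later compute coverage with deepTools ``bamCoverage`` (just one illustrative downstream step), leave duplicate filtering switched off::
+
+    # sketch: keep duplicates when processing the GC-corrected BAM
+    bamCoverage -b input_gc_corrected.bam -o coverage.bw
+    # do NOT add --ignoreDuplicates here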
 
 -----