Changeset 99209ed2ec87
Previous changeset: 10:2c8931827fa5 (2015-03-30)
Next changeset: 12:cb25a70933ea (2017-09-15)

Commit message:
planemo upload for repository https://github.com/peterjc/pico_galaxy/tree/master/workflows/secreted_protein_workflow commit 4bd49529e9ca2096cd875e98daf7190d13fa8d0b-dirty

modified:
    README.rst
    repository_dependencies.xml

added:
    secreted_protein_workflow.ga

removed:
    N_abberans_piechart_mouseover.png
    blast_top_hit_species.ga
    blast_top_hit_species.png
diff -r 2c8931827fa5 -r 99209ed2ec87 N_abberans_piechart_mouseover.png
Binary file N_abberans_piechart_mouseover.png has changed
diff -r 2c8931827fa5 -r 99209ed2ec87 README.rst
--- a/README.rst Mon Mar 30 11:46:13 2015 -0400
+++ b/README.rst Wed Feb 01 13:21:32 2017 -0500
@@ -1,180 +1,99 @@
-Introduction
-============
-
-Galaxy is a web-based platform for biological data analysis, supporting
-extension with additional tools (often wrappers for existing command line
-tools) and datatypes. See http://www.galaxyproject.org/ and the public
-server at http://usegalaxy.org for an example.
+This is package is a Galaxy workflow for the identification of candidate
+secreted proteins from a given protein FASTA file.
 
-The NCBI BLAST suite is a widely used set of tools for biological sequence
-comparison. It is available as standalone binaries for use at the command
-line, and via the NCBI website for smaller searches. For more details see
-http://blast.ncbi.nlm.nih.gov/Blast.cgi
+It runs SignalP v3.0 (Bendtsen et al. 2004) and selects only proteins with a
+strong predicted signal peptide, and then runs TMHMM v2.0 (Krogh et al. 2001)
+on those, and selects only proteins without a predicted trans-membrane helix.
+This workflow was used in Kikuchi et al. (2011), and is a simplification of
+the candidate effector protocol described in Jones et al. (2009).
 
-This is an example workflow using the Galaxy wrappers for NCBI BLAST+,
-see https://github.com/peterjc/galaxy_blast
+See http://www.galaxyproject.org for information about the Galaxy Project.
 
 
-Galaxy workflow for counting species of top BLAST hits
-======================================================
+Availability
+============
 
-This Galaxy workflow (file ``blast_top_hit_species.ga``) is intended for an
-initial assessment of a transcriptome assembly to give a crude indication of
-any major contamination present based on the species of the top BLAST hit
-of 1000 representative sequences.
+This workflow is available to download and/or install from the main
+Galaxy Tool Shed:
 
-.. image:: https://raw.githubusercontent.com/peterjc/galaxy_blast/master/workflows/blast_top_hit_species/blast_top_hit_species.png
-
-In words, the workflow proceeds as follows:
+http://toolshed.g2.bx.psu.edu/view/peterjc/secreted_protein_workflow
 
-1. Upload/import your transcriptome assembly or any nucleotide FASTA file.
-2. Samples 1000 representative sequences, selected uniformly/evenly though
-   the file.
-3. Convert the sampled FASTA file into a three column tabular file.
-4. Runs NCBI BLASTX of the sampled FASTA file against the latest NCBI ``nr``
-   database (assuming this is already available setup on your local Galaxy
-   under the alias ``nr``), requesting tabular output including the taxonomy
-   fields, and at most one matching target sequence.
-5. Remove any duplicate alignments (multiple HSPs for the same match).
-6. Combine the filtered BLAST output with the tabular version of the 1000
-   sequences to give a new tabular file with exactly 1000 lines, adding
-   ``None`` for sequences missing a BLAST hit.
-7. Count the BLAST species names in this file.
-8. Sort the counts.
+Test releases (which should not normally be used) are on the Test Tool Shed:
+
+http://testtoolshed.g2.bx.psu.edu/view/peterjc/secreted_protein_workflow
 
-Finally we would suggest visualising the sorted tally table as a Pie Chart.
+Development is being done on github here:
+
+https://github.com/peterjc/pico_galaxy/tree/master/workflows/secreted_protein_workflow
 
 
 Sample Data
 ===========
 
-As an example, you can upload the transcriptome assembly of the nematode
-*Nacobbus abberans* from Eves van den Akker *et al.* (2015),
-http://dx.doi.org/10.1093/gbe/evu171 using this URL:
-
-http://nematode.net/Data/nacobbus_aberrans_transcript_assembly/N.abberans_reference_no_contam.zip
-
-Running this workflow with a copy of the NCBI non-redundant ``nr`` database
-from 16 Oct 2014 (which did **not** contain this *N. abberans* dataset) gave
-the following results - note 609 out of the 1000 sequences gave no BLAST hit.
-
-===== ==================
-Count Subject Blast Name
------ ------------------
-  609 None
-  244 nematodes
-   30 ascomycetes
-   27 eukaryotes
-    8 basidiomycetes
-    6 a
[...]
...u should also cite Galaxy, and the NCBI BLAST+ tools:
+Peter J.A. Cock, Björn A. Grüning, Konrad Paszkiewicz and Leighton Pritchard (2013).
+Galaxy tools and workflows for sequence analysis with applications
+in molecular plant pathology. PeerJ 1:e167
+http://dx.doi.org/10.7717/peerj.167
 
-BLAST+: architecture and applications.
-C. Camacho et al. BMC Bioinformatics 2009, 10:421.
-DOI: http://dx.doi.org/10.1186/1471-2105-10-421
+Bendtsen, J.D., Nielsen, H., von Heijne, G., Brunak, S. (2004)
+Improved prediction of signal peptides: SignalP 3.0. J Mol Biol 340: 783–95.
+http://dx.doi.org/10.1016/j.jmb.2004.05.028
+
+Krogh, A., Larsson, B., von Heijne, G., Sonnhammer, E. (2001)
+Predicting transmembrane protein topology with a hidden Markov model:
+application to complete genomes. J Mol Biol 305: 567- 580.
+http://dx.doi.org/10.1006/jmbi.2000.4315
 
 
-Automated Installation
-======================
+Additional References
+=====================
+
+Kikuchi, T., Cotton, J.A., Dalzell, J.J., Hasegawa. K., et al. (2011)
+Genomic insights into the origin of parasitism in the emerging plant
+pathogen *Bursaphelenchus xylophilus*. PLoS Pathog 7: e1002219.
+http://dx.doi.org/10.1371/journal.ppat.1002219
 
-Installation via the Galaxy Tool Shed should take care of the dependencies
-on Galaxy tools including the NCBI BLAST+ wrappers and associated binaries.
+Jones, J.T., Kumar, A., Pylypenko, L.A., Thirugnanasambandam, A., et al. (2009)
+Identification and functional characterization of effectors in expressed
+sequence tags from various life cycle stages of the potato cyst nematode
+*Globodera pallida*. Mol Plant Pathol 10: 815–28.
+http://dx.doi.org/10.1111/j.1364-3703.2009.00585.x
+
 
-However, this workflow requires a current version of the NCBI nr protein
-BLAST database to be listed in ``blastdb_p.loc`` with the key ``nr`` (lower
-case).
+Dependencies
+============
+
+These dependencies should be resolved automatically via the Galaxy Tool Shed:
+
+* http://toolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp
+* http://toolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id
+
+However, at the time of writing those Galaxy tools have their own
+dependencies required for this workflow which require manual
+installation (SignalP v3.0 and TMHMM v2.0).
 
 
 History
@@ -183,7 +102,13 @@
 ======= ======================================================================
 Version Changes
 ------- ----------------------------------------------------------------------
-v0.1.0  - Initial Tool Shed release, targetting NCBI BLAST+ 2.2.29
+v0.0.1  - Initial release to Tool Shed (May, 2013)
+        - Expanded README file to include example data
+v0.0.2  - Updated versions of the tools used, inclulding core Galaxy Filter
+          tool to avoid warning about new ``header_lines`` parameter.
+        - Added link to Tool Shed in the workflow annotation explaining there
+          is a README file with sample data, and a requested citation.
+v0.0.3  - Use MIT licence.
 ======= ======================================================================
 
 
@@ -192,20 +117,18 @@
 
 This workflow is under source code control here:
 
-https://github.com/peterjc/galaxy_blast/tree/master/workflows/blast_top_hit_species
+https://github.com/peterjc/pico_galaxy/tree/master/workflows/secreted_protein_workflow
 
 To prepare the tar-ball for uploading to the Tool Shed, I use this:
 
-    $ tar -cf blast_top_hit_species.tar.gz README.rst repository_dependencies.xml blast_top_hit_species.ga blast_top_hit_species.png N_abberans_piechart_mouseover.png
+    $ tar -cf secreted_protein_workflow.tar.gz README.rst repository_dependencies.xml secreted_protein_workflow.ga
 
 Check this,
 
-    $ tar -tzf blast_top_hit_species.tar.gz
+    $ tar -tzf secreted_protein_workflow.tar.gz
     README.rst
    repository_dependencies.xml
-    blast_top_hit_species.ga
-    blast_top_hit_species.png
-    N_abberans_piechart_mouseover.png
+    secreted_protein_workflow.ga
 
 
 Licence (MIT)
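The rewritten README above describes the selection logic only in prose: run SignalP v3.0 and keep proteins with a strong predicted signal peptide, run TMHMM v2.0 on those and keep proteins without a predicted trans-membrane helix, then pull the surviving sequences back out of the protein FASTA file. Purely as an illustrative sketch of that logic outside Galaxy (this script is not part of the repository; the SignalP column position is an assumption, Biopython is used only for convenience, and the file names are invented):

    # Illustrative sketch only -- the workflow itself chains Galaxy's SignalP 3.0,
    # Filter, TMHMM 2.0 and "Filter sequences by ID" tools; it does not ship this script.
    from Bio import SeqIO  # Biopython, used here for convenience only


    def ids_passing(path, keep):
        """Return the IDs (first column) of rows in a tab-separated file where keep(row) is True."""
        ids = set()
        with open(path) as handle:
            for line in handle:
                if not line.strip() or line.startswith("#"):
                    continue
                row = line.rstrip("\n").split("\t")
                if keep(row):
                    ids.add(row[0])
        return ids


    # SignalP v3.0 tabular output: keep sequences called as having a signal peptide.
    # Which column holds the Y/N call is an assumption here (column 14, 1-based).
    signal_ids = ids_passing("signalp3.tabular", lambda row: row[13] == "Y")

    # TMHMM v2.0 tabular output: the workflow's Filter step keeps rows where column 5
    # (presumably the predicted helix count) is zero -- the "c5 == 0" condition
    # visible in the new workflow file further down this page.
    no_tm_ids = ids_passing("tmhmm2.tabular", lambda row: row[4] == "0")

    wanted = signal_ids & no_tm_ids

    # Equivalent of the final "Filter sequences by ID" step: write out only the
    # candidate secreted proteins.
    records = (rec for rec in SeqIO.parse("input_proteins.fasta", "fasta") if rec.id in wanted)
    count = SeqIO.write(records, "secreted_proteins.fasta", "fasta")
    print(count, "candidate secreted proteins written")

In the actual workflow the TMHMM step only sees the SignalP-positive subset, but the end result is the same set of identifiers as the intersection computed above.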
diff -r 2c8931827fa5 -r 99209ed2ec87 blast_top_hit_species.ga
--- a/blast_top_hit_species.ga Mon Mar 30 11:46:13 2015 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,331 +0,0 @@
-{
-    "a_galaxy_workflow": "true",
-    "annotation": "",
-    "format-version": "0.1",
-    "name": "Species of top BLAST hits",
-    "steps": {
-        "0": {
-            "annotation": "",
-            "id": 0,
-            "input_connections": {},
-            "inputs": [
-                {
-                    "description": "",
-                    "name": "Transcriptome FASTA file"
-                }
-            ],
-            "label": null,
-            "name": "Input dataset",
-            "outputs": [],
-            "position": {
-                "left": 242,
-                "top": 119
-            },
-            "tool_errors": null,
-            "tool_id": null,
-            "tool_state": "{\"name\": \"Transcriptome FASTA file\"}",
-            "tool_version": null,
-            "type": "data_input",
-            "user_outputs": [],
-            "uuid": "e445b44b-02a7-4fd1-8944-cd680f967062"
-        },
-        "1": {
-            "annotation": "This workflow is deliberately a simple/crude assessment, and there is no need to run BLASTX on all the sequences - a sample of 1000 should be enough.",
-            "id": 1,
-            "input_connections": {
-                "input_file": {
-                    "id": 0,
-                    "output_name": "output"
-                }
-            },
-            "inputs": [],
-            "label": null,
-            "name": "Sub-sample sequences files",
-            "outputs": [
-                {
-                    "name": "output_file",
-                    "type": "input"
-                }
-            ],
-            "position": {
-                "left": 435,
-                "top": 119
-            },
-            "post_job_actions": {
-                "RenameDatasetActionoutput_file": {
-                    "action_arguments": {
-                        "newname": "1000 sequences from #{input_file}"
-                    },
-                    "action_type": "RenameDatasetAction",
-                    "output_name": "output_file"
-                }
-            },
-            "tool_errors": null,
-            "tool_id": "toolshed.g2.bx.psu.edu/repos/peterjc/sample_seqs/sample_seqs/0.2.1",
-            "tool_state": "{\"__page__\": 0, \"input_file\": \"null\", \"__rerun_remap_job_id__\": null, \"sampling\": \"{\\\"count\\\": \\\"1000\\\", \\\"type\\\": \\\"desired_count\\\", \\\"__current_case__\\\": 2}\", \"chromInfo\": \"\\\"/mnt/galaxy/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\", \"interleaved\": \"\\\"False\\\"\"}",
-            "tool_version": "0.2.1",
-            "type": "tool",
-            "user_outputs": [],
-            "uuid": "87ce69ef-5fb0-41b0-9575-d3b96544f8be"
-        },
-        "2": {
-            "annotation": "We only want one line per query, so limit this to the best scoring target sequence. Assumes current NCBI nr database is available locally as \"nr\".",
-            "id": 2,
-            "input_connections": {
-                "query": {
-                    "id": 1,
-                    "output_name": "output_file"
-                }
-            },
-            "inputs": [],
-            "label": null,
-            "name": "NCBI BLAST+ blastx",
-            "outputs": [
-                {
-                    "name": "output1",
-                    "type": "tabular"
-                }
-            ],
-            "position": {
-                "left": 489,
-                "top": 263
-            },
-            "post_job_actions": {
-                "RenameDatasetActionoutput1": {
-                    "action_arguments": {
-                        "newname": "Top BLAST match"
-                    },
-                    "action_type": "RenameDatasetAction",
-                    "output_name": "output1"
-                }
-            },
-            "tool_errors": null,
-            "tool_id": "toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_pl
[...]
...lue\\\": \\\"None\\\", \\\"__current_case__\\\": 0}, \\\"fill_columns_by\\\": \\\"fill_unjoined_only\\\", \\\"__current_case__\\\": 1}\", \"unmatched\": \"\\\"-u\\\"\", \"input1\": \"null\", \"chromInfo\": \"\\\"/mnt/galaxy/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\"}",
-            "tool_version": "2.0.2",
-            "type": "tool",
-            "user_outputs": [],
-            "uuid": "4c280b0e-b4a6-4ae4-8a81-d6e93932ef71"
-        },
-        "6": {
-            "annotation": "Here we make a tally table of the BLAST species name column",
-            "id": 6,
-            "input_connections": {
-                "input": {
-                    "id": 5,
-                    "output_name": "out_file1"
-                }
-            },
-            "inputs": [],
-            "label": null,
-            "name": "Count",
-            "outputs": [
-                {
-                    "name": "out_file1",
-                    "type": "tabular"
-                }
-            ],
-            "position": {
-                "left": 952,
-                "top": 398
-            },
-            "post_job_actions": {
-                "HideDatasetActionout_file1": {
-                    "action_arguments": {},
-                    "action_type": "HideDatasetAction",
-                    "output_name": "out_file1"
-                },
-                "RenameDatasetActionout_file1": {
-                    "action_arguments": {
-                        "newname": "Top BLAST hit species counts (unsorted)"
-                    },
-                    "action_type": "RenameDatasetAction",
-                    "output_name": "out_file1"
-                }
-            },
-            "tool_errors": null,
-            "tool_id": "Count1",
-            "tool_state": "{\"__page__\": 0, \"column\": \"{\\\"__class__\\\": \\\"UnvalidatedValue\\\", \\\"value\\\": [\\\"19\\\"]}\", \"__rerun_remap_job_id__\": null, \"delim\": \"\\\"T\\\"\", \"input\": \"null\", \"chromInfo\": \"\\\"/mnt/galaxy/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\"}",
-            "tool_version": "1.0.0",
-            "type": "tool",
-            "user_outputs": [],
-            "uuid": "d3322137-1911-426d-87a7-c82b5fc16825"
-        },
-        "7": {
-            "annotation": "Sorting the counts makes the results easier to interpret directly.",
-            "id": 7,
-            "input_connections": {
-                "input": {
-                    "id": 6,
-                    "output_name": "out_file1"
-                }
-            },
-            "inputs": [],
-            "label": null,
-            "name": "Sort",
-            "outputs": [
-                {
-                    "name": "out_file1",
-                    "type": "input"
-                }
-            ],
-            "position": {
-                "left": 1056,
-                "top": 506
-            },
-            "post_job_actions": {
-                "RenameDatasetActionout_file1": {
-                    "action_arguments": {
-                        "newname": "Top BLAST hit species counts"
-                    },
-                    "action_type": "RenameDatasetAction",
-                    "output_name": "out_file1"
-                }
-            },
-            "tool_errors": null,
-            "tool_id": "sort1",
-            "tool_state": "{\"__page__\": 0, \"style\": \"\\\"num\\\"\", \"column\": \"{\\\"__class__\\\": \\\"UnvalidatedValue\\\", \\\"value\\\": \\\"1\\\"}\", \"__rerun_remap_job_id__\": null, \"column_set\": \"[]\", \"input\": \"null\", \"chromInfo\": \"\\\"/mnt/galaxy/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\", \"order\": \"\\\"DESC\\\"\"}",
-            "tool_version": "1.0.3",
-            "type": "tool",
-            "user_outputs": [],
-            "uuid": "c81cc61d-52a3-44ee-b646-b23e0e004c38"
-        }
-    },
-    "uuid": "9fe8754a-3a87-4f6a-89a2-141b02b4793e"
-}
\ No newline at end of file
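For context on what was removed: the old workflow's final steps tallied the species name of each top BLAST hit (Galaxy's Count tool on column 19 of the joined table, as recorded in the Count1 tool_state above) and then sorted the tally in descending numeric order. A minimal stand-alone sketch of that tally-and-sort stage (the input file name and tab-separated layout are assumed for illustration):

    # Sketch of the removed workflow's last two steps: count the values in the
    # species-name column (column 19, 1-based), then sort the counts descending.
    from collections import Counter

    counts = Counter()
    with open("top_blast_hits_joined.tabular") as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            # Column 19 held the subject BLAST (species) name, with "None"
            # filled in for queries that had no BLAST hit.
            counts[fields[18]] += 1

    for name, total in counts.most_common():
        print("%i\t%s" % (total, name))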
diff -r 2c8931827fa5 -r 99209ed2ec87 blast_top_hit_species.png
Binary file blast_top_hit_species.png has changed
diff -r 2c8931827fa5 -r 99209ed2ec87 repository_dependencies.xml
--- a/repository_dependencies.xml Mon Mar 30 11:46:13 2015 -0400
+++ b/repository_dependencies.xml Wed Feb 01 13:21:32 2017 -0500
@@ -1,9 +1,7 @@
 <?xml version="1.0"?>
-<repositories description="This workflow requires the NCBI BLAST+ tools etc">
-    <repository changeset_revision="5e9d5e536b79" name="ncbi_blast_plus" owner="devteam" toolshed="https://testtoolshed.g2.bx.psu.edu" />
-    <repository changeset_revision="ae709fd50581" name="fasta_to_tabular" owner="devteam" toolshed="https://testtoolshed.g2.bx.psu.edu" />
-    <repository changeset_revision="4231c585b6dd" name="sample_seqs" owner="peterjc" toolshed="https://testtoolshed.g2.bx.psu.edu" />
-    <repository changeset_revision="2064ae2602b1" name="unique" owner="bgruening" toolshed="https://testtoolshed.g2.bx.psu.edu" />
-    <!-- Also uses tool_id join1, Count1, and sort1 which are currently
-         still shipped with Galaxy itself rather than via the Tool Shed -->
+<repositories description="This requires my SignalP and TMHMM wrapers, and my FASTA filtering tool.">
+    <!-- Revision 15:6abd809cefdd on the main tool shed is v0.2.4, the current latest - but older should be OK -->
+    <repository changeset_revision="3cb02adf4326" name="tmhmm_and_signalp" owner="peterjc" toolshed="https://testtoolshed.g2.bx.psu.edu" />
+    <!-- Revision 2:abdd608c869b on the main tool shed is v0.0.5, the current latest - but older should be OK -->
+    <repository changeset_revision="bc263e94ea98" name="seq_filter_by_id" owner="peterjc" toolshed="https://testtoolshed.g2.bx.psu.edu" />
 </repositories>
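Galaxy's Tool Shed resolves the repositories declared in this file itself when the workflow repository is installed. As a reading aid only (not part of the repository), the declared dependencies can be listed with Python's standard library:

    # List the repositories declared in repository_dependencies.xml.
    import xml.etree.ElementTree as ET

    root = ET.parse("repository_dependencies.xml").getroot()
    print(root.get("description"))
    for repo in root.findall("repository"):
        print("  {name} (owner {owner}, revision {changeset_revision}) from {toolshed}".format(**repo.attrib))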
diff -r 2c8931827fa5 -r 99209ed2ec87 secreted_protein_workflow.ga
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/secreted_protein_workflow.ga Wed Feb 01 13:21:32 2017 -0500
@@ -0,0 +1,288 @@
+{
+    "a_galaxy_workflow": "true",
+    "annotation": "Runs SignalP v3.0 and TMHMM v2.0 to look for secreted proteins.<br />\n<br />\nThis workflow is <a href=\"http://toolshed.g2.bx.psu.edu/view/peterjc/secreted_protein_workflow\" target=\"_blank\">available on the Galaxy Tool Shed</a> with a README file giving more information including sample data, and full citation details (Cock and Pritchard 2014).",
+    "format-version": "0.1",
+    "name": "Find secreted proteins with TMHMM and SignalP",
+    "steps": {
+        "0": {
+            "annotation": "",
+            "id": 0,
+            "input_connections": {},
+            "inputs": [
+                {
+                    "description": "",
+                    "name": "Input Dataset"
+                }
+            ],
+            "name": "Input dataset",
+            "outputs": [],
+            "position": {
+                "left": 200,
+                "top": 200
+            },
+            "tool_errors": null,
+            "tool_id": null,
+            "tool_state": "{\"name\": \"Input Dataset\"}",
+            "tool_version": null,
+            "type": "data_input",
+            "user_outputs": []
+        },
+        "1": {
+            "annotation": "",
+            "id": 1,
+            "input_connections": {
+                "fasta_file": {
+                    "id": 0,
+                    "output_name": "output"
+                }
+            },
+            "inputs": [
+                {
+                    "description": "runtime parameter for tool SignalP 3.0",
+                    "name": "organism"
+                }
+            ],
+            "name": "SignalP 3.0",
+            "outputs": [
+                {
+                    "name": "tabular_file",
+                    "type": "tabular"
+                }
+            ],
+            "position": {
+                "left": 240,
+                "top": 341
+            },
+            "post_job_actions": {
+                "HideDatasetActiontabular_file": {
+                    "action_arguments": {},
+                    "action_type": "HideDatasetAction",
+                    "output_name": "tabular_file"
+                }
+            },
+            "tool_errors": null,
+            "tool_id": "signalp3",
+            "tool_state": "{\"__page__\": 0, \"truncate\": \"\\\"60\\\"\", \"chromInfo\": \"\\\"/opt/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\", \"fasta_file\": \"null\", \"organism\": \"{\\\"__class__\\\": \\\"RuntimeValue\\\"}\", \"__rerun_remap_job_id__\": null}",
+            "tool_version": "0.0.12",
+            "type": "tool",
+            "user_outputs": []
+        },
+        "2": {
+            "annotation": "Select proteins with predicted signal peptide (SignalP NN D-Score or HMM)",
+            "id": 2,
+            "input_connections": {
+                "input": {
+                    "id": 1,
+                    "output_name": "tabular_file"
+                }
+            },
+            "inputs": [],
+            "name": "Filter",
+            "outputs": [
+                {
+                    "name": "out_file1",
+                    "type": "input"
+                }
+            ],
+            "position": {
+                "left": 323,
+                "top": 528
+            },
+            "post_job_actions": {
+                "HideDatasetActionout_file1": {
+                    "action_arguments": {},
+                    "action_type": "HideDatasetAction",
+                    "output_name": "out_file1"
+                },
+                "RenameDatasetActionout_file1": {
+                    "action_arguments": {
+                        "newname": "Filtered SignalP results"
+                    },
+                    "action_type": "RenameDatasetAction",
+                    "output_name": "out_file1"
+                }
+            },
+            "tool_errors": null,
[...]
...-dist/tool-data/shared/ucsc/chrom/?.len\\\"\", \"__rerun_remap_job_id__\": null}",
+            "tool_version": "0.0.11",
+            "type": "tool",
+            "user_outputs": []
+        },
+        "5": {
+            "annotation": "Select proteins with no predicted transmembrane helices.",
+            "id": 5,
+            "input_connections": {
+                "input": {
+                    "id": 4,
+                    "output_name": "tabular_file"
+                }
+            },
+            "inputs": [],
+            "name": "Filter",
+            "outputs": [
+                {
+                    "name": "out_file1",
+                    "type": "input"
+                }
+            ],
+            "position": {
+                "left": 729,
+                "top": 566
+            },
+            "post_job_actions": {
+                "HideDatasetActionout_file1": {
+                    "action_arguments": {},
+                    "action_type": "HideDatasetAction",
+                    "output_name": "out_file1"
+                },
+                "RenameDatasetActionout_file1": {
+                    "action_arguments": {
+                        "newname": "Filtered TMHMM results"
+                    },
+                    "action_type": "RenameDatasetAction",
+                    "output_name": "out_file1"
+                }
+            },
+            "tool_errors": null,
+            "tool_id": "Filter1",
+            "tool_state": "{\"__page__\": 0, \"__rerun_remap_job_id__\": null, \"cond\": \"\\\"c5== 0\\\"\", \"input\": \"null\", \"header_lines\": \"\\\"0\\\"\", \"chromInfo\": \"\\\"/opt/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\"}",
+            "tool_version": "1.1.0",
+            "type": "tool",
+            "user_outputs": []
+        },
+        "6": {
+            "annotation": "Select those sequences with no transmembrane helices (from those with signal peptides).",
+            "id": 6,
+            "input_connections": {
+                "input_file": {
+                    "id": 3,
+                    "output_name": "output_pos"
+                },
+                "input_tabular": {
+                    "id": 5,
+                    "output_name": "out_file1"
+                }
+            },
+            "inputs": [],
+            "name": "Filter sequences by ID",
+            "outputs": [
+                {
+                    "name": "output_pos",
+                    "type": "fasta"
+                },
+                {
+                    "name": "output_neg",
+                    "type": "fasta"
+                }
+            ],
+            "position": {
+                "left": 893,
+                "top": 281
+            },
+            "post_job_actions": {
+                "HideDatasetActionoutput_neg": {
+                    "action_arguments": {},
+                    "action_type": "HideDatasetAction",
+                    "output_name": "output_neg"
+                },
+                "RenameDatasetActionoutput_pos": {
+                    "action_arguments": {
+                        "newname": "Secreted proteins"
+                    },
+                    "action_type": "RenameDatasetAction",
+                    "output_name": "output_pos"
+                }
+            },
+            "tool_errors": null,
+            "tool_id": "seq_filter_by_id",
+            "tool_state": "{\"__page__\": 0, \"output_choice_cond\": \"{\\\"output_choice\\\": \\\"pos\\\", \\\"__current_case__\\\": 1}\", \"input_file\": \"null\", \"__rerun_remap_job_id__\": null, \"input_tabular\": \"null\", \"chromInfo\": \"\\\"/opt/galaxy-dist/tool-data/shared/ucsc/chrom/?.len\\\"\", \"columns\": \"{\\\"__class__\\\": \\\"UnvalidatedValue\\\", \\\"value\\\": [\\\"1\\\"]}\"}",
+            "tool_version": "0.0.5",
+            "type": "tool",
+            "user_outputs": []
+        }
+    }
+}
\ No newline at end of file