I'm trying this assembly with canu 1.7 (I'm using gridOptionsred because we previously had jobs running out of memory, but I see the same problem without this option):
canu \
  gridOptions="-l h_rt=48:00:00" \
  gridOptionsred="-l h_vmem=8g -pe smp 8" \
  gridEngineMemoryOption="-l h_vmem=MEMORY" \
  stopOnReadQuality=false \
  stageDirectory=/tmp/uid-\$JOB_ID-\$SGE_TASK_ID \
  -p asm_canu1.7 -d asm_canu1.7 \
  genomeSize=14.1m \
  corMhapSensitivity=normal \
  corOutCoverage=100 \
  minOverlapLength=3000 \
  minReadLength=10000 \
  -nanopore-raw reads.Albacore2.1.10.fastq.gz
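For reference, my understanding is that gridOptions is appended to every job canu submits, gridOptionsred only to the red array jobs, and gridEngineMemoryOption is the template canu fills in for its own memory request. So each red array should be submitted roughly like this (a sketch only; the job name, qsub form, and script path are illustrative, not taken from the canu source):

# canu also adds its own memory request via gridEngineMemoryOption,
# so the real submission carries an extra -l h_vmem=... term
qsub -cwd -N red_asm -t 1-5000 \
     -l h_rt=48:00:00 \
     -l h_vmem=8g -pe smp 8 \
     ./unitigging/3-overlapErrorAdjustment/red.sh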
Running on a Linux SGE cluster.
The red jobs are queued like this:
job-ID prior name user state submit/start at queue slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
1733233 0.00000 red_asm uid qw 04/03/2018 15:16:17 8 1-5000:1
1733234 0.00000 red_asm uid qw 04/03/2018 15:16:18 8 1-5000:1
1733235 0.00000 red_asm uid qw 04/03/2018 15:16:19 8 1-5000:1
1733236 0.00000 red_asm uid qw 04/03/2018 15:16:19 8 1-5000:1
1733237 0.00000 red_asm uid qw 04/03/2018 15:16:20 8 1-5000:1
1733238 0.00000 red_asm uid qw 04/03/2018 15:16:21 8 1-5000:1
1733239 0.00000 red_asm uid qw 04/03/2018 15:16:22 8 1-5000:1
1733240 0.00000 red_asm uid qw 04/03/2018 15:16:23 8 1-5000:1
1733241 0.00000 red_asm uid qw 04/03/2018 15:16:24 8 1-5000:1
1733242 0.00000 red_asm uid qw 04/03/2018 15:16:24 8 1-5000:1
1733243 0.00000 red_asm uid qw 04/03/2018 15:16:25 8 1-5000:1
1733244 0.00000 red_asm uid qw 04/03/2018 15:16:26 8 1-5000:1
1733245 0.00000 red_asm uid qw 04/03/2018 15:16:27 8 1-5000:1
1733246 0.00000 red_asm uid qw 04/03/2018 15:16:28 8 1-5000:1
1733247 0.00000 red_asm uid qw 04/03/2018 15:16:29 8 1-5000:1
1733248 0.00000 red_asm uid qw 04/03/2018 15:16:30 8 1-5000:1
1733249 0.00000 red_asm uid qw 04/03/2018 15:16:30 8 1-5000:1
1733250 0.00000 red_asm uid qw 04/03/2018 15:16:31 8 1-5000:1
1733251 0.00000 red_asm uid qw 04/03/2018 15:16:32 8 1-5000:1
1733252 0.00000 red_asm uid qw 04/03/2018 15:16:33 8 1-5000:1
1733253 0.00000 red_asm uid qw 04/03/2018 15:16:34 8 1-5000:1
1733254 0.00000 red_asm uid qw 04/03/2018 15:16:35 8 1-5000:1
1733255 0.00000 red_asm uid qw 04/03/2018 15:16:36 8 1-5000:1
1733256 0.00000 red_asm uid qw 04/03/2018 15:16:36 8 1-5000:1
1733257 0.00000 red_asm uid qw 04/03/2018 15:16:37 8 1-5000:1
1733258 0.00000 red_asm uid qw 04/03/2018 15:16:38 8 1-5000:1
1733259 0.00000 red_asm uid qw 04/03/2018 15:16:39 8 1-5000:1
1733260 0.00000 red_asm uid qw 04/03/2018 15:16:40 8 1-5000:1
1733261 0.00000 red_asm uid qw 04/03/2018 15:16:41 8 1-5000:1
1733262 0.00000 red_asm uid qw 04/03/2018 15:16:41 8 1-5000:1
1733263 0.00000 red_asm uid qw 04/03/2018 15:16:42 8 1-5000:1
1733264 0.00000 red_asm uid qw 04/03/2018 15:16:43 8 1-3323:1
1733265 0.00000 canu_asm uid hqw 04/03/2018 15:16:43 1
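Tallying the queued task ranges from qstat gives the same picture (a rough one-liner; it assumes the task range is the last column, as in the listing above):

qstat -u uid | awk '$3 == "red_asm" { split($NF, r, "[-:]"); n += r[2] - r[1] + 1 } END { print n " red tasks queued" }'

That works out to 31 × 5,000 + 3,323 = 158,323 tasks, i.e. one task per raw read.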
canu.out has:
-- In 'asm_canu1.7.gkpStore', found Nanopore reads:
-- Raw: 158323
-- Corrected: 44330
-- Trimmed: 44293
...
-- Configure RED for 2gb memory.
-- Batches of at most (unlimited) reads.
-- 500000000 bases.
-- Expecting evidence of at most 877111733 bases per iteration.
--
-- Job   Total Memory   Read Range   Reads   Bases   Reads Memory   Olaps   Olaps Memory   Evidence Memory   (Memory in MB)
-- ---- -------- ------------------- --------- ------------ -------- ------------ -------- --------
-- 1 3720.96 1-1 0 0 0.00 0 0.00 1672.96
-- 2 3720.96 2-2 0 0 0.00 0 0.00 1672.96
-- 3 3720.96 3-3 0 0 0.00 0 0.00 1672.96
-- 4 3720.96 4-4 0 0 0.00 0 0.00 1672.96
-- 5 3720.96 5-5 0 0 0.00 0 0.00 1672.96
-- 6 3720.96 6-6 0 0 0.00 0 0.00 1672.96
-- 7 3720.96 7-7 0 0 0.00 0 0.00 1672.96
-- 8 3720.96 8-8 0 0 0.00 0 0.00 1672.96
-- 9 3720.96 9-9 0 0 0.00 0 0.00 1672.96
-- 10 3720.96 10-10 0 0 0.00 0 0.00 1672.96
-- 11 3720.96 11-11 0 0 0.00 0 0.00 1672.96
-- 12 3720.96 12-12 0 0 0.00 0 0.00 1672.96
-- 13 3720.96 13-13 0 0 0.00 0 0.00 1672.96
-- 14 3720.96 14-14 0 0 0.00 0 0.00 1672.96
-- 15 3721.37 15-15 1 36111 0.41 283 0.00 1672.96
...
...
...
-- 158310 3720.96 158310-158310 0 0 0.00 0 0.00 1672.96
-- 158311 3721.26 158311-158311 1 26051 0.30 119 0.00 1672.96
-- 158312 3720.96 158312-158312 0 0 0.00 0 0.00 1672.96
-- 158313 3720.96 158313-158313 0 0 0.00 0 0.00 1672.96
-- 158314 3720.96 158314-158314 0 0 0.00 0 0.00 1672.96
-- 158315 3720.96 158315-158315 0 0 0.00 0 0.00 1672.96
-- 158316 3720.96 158316-158316 0 0 0.00 0 0.00 1672.96
-- 158317 3720.96 158317-158317 0 0 0.00 0 0.00 1672.96
-- 158318 3720.96 158318-158318 0 0 0.00 0 0.00 1672.96
-- 158319 3720.96 158319-158319 0 0 0.00 0 0.00 1672.96
-- 158320 3720.96 158320-158320 0 0 0.00 0 0.00 1672.96
-- 158321 3720.96 158321-158321 0 0 0.00 0 0.00 1672.96
-- 158322 3720.96 158322-158322 0 0 0.00 0 0.00 1672.96
-- 158323 3720.96 158323-158323 0 0 0.00 0 0.00 1672.96
-- ---- -------- ------------------- --------- ------------ -------- ------------ -------- --------
-- 1380064880 8436524
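For comparison, a quick count of how many of those configured jobs actually contain a read (a rough sketch that keys on the per-job rows of the table above; the path to canu.out may differ):

awk '/^-- +[0-9]+ +[0-9.]+ +[0-9]+-[0-9]+ / { tot++; if ($5 > 0) used++ } END { printf "%d of %d red jobs have at least one read\n", used, tot }' asm_canu1.7/canu.out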
That is, canu appears to have set off a job for every raw read, including all of the uncorrected reads that are no longer in the assembly.
I haven't seen this before with previous versions of canu using the same data set and parameters. Is this expected behaviour, or have I screwed something up?
Thanks
John