I’ve got PBJelly successfully compiled and tested (using the procedure from here), so now we’re going to try running it!
First, check my Protocol.xml file. This is the file that tells PBJelly where to find the reference scaffolds, the location of the new read files, and which blasr arguments to use (the important one being the number of cores), plus a job entry for each individual input file.
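For orientation, a PBJelly Protocol.xml generally has this shape. This is a minimal sketch with placeholder paths and example blasr arguments, not my actual file:

```xml
<jellyProtocol>
    <reference>/path/to/reference_scaffolds.fasta</reference>
    <outputDir>/path/to/output/</outputDir>
    <!-- -nproc is the blasr argument that sets the number of cores -->
    <blasr>-minMatch 8 -minPctIdentity 70 -bestn 1 -nCandidates 20 -maxScore -500 -noSplitSubreads -nproc 16</blasr>
    <input baseDir="/path/to/reads/">
        <job>filtered_subreads_1.fastq</job>
        <job>filtered_subreads_2.fastq</job>
    </input>
</jellyProtocol>
```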
I also built a little shell script that loads the various libraries and path requirements for the different programs PBJelly uses, to make life easier, since our lack of root permissions means we have to keep special local versions of those libraries. It’s entitled PBJelly.sh, and is stored in
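The exact contents depend on the local install, but modeled on PBSuite’s stock setup script it would look roughly like this (every path here is a placeholder, not our real layout):

```shell
#!/bin/bash
# Hypothetical PBJelly.sh -- all paths below are placeholders.
export SWEETPATH=/path/to/PBSuite                 # PBSuite install directory
export PATH=$PATH:$SWEETPATH/bin                  # Jelly.py and the stage scripts
export PYTHONPATH=$PYTHONPATH:$SWEETPATH          # PBSuite's python modules
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/local/libs  # our local library copies
```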
The shell script is run via
You can test that this worked by echoing $PATH
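One detail worth noting: because the script only exports environment variables, it has to be sourced rather than executed, or the exports vanish with the subshell. A hypothetical invocation (the real location of PBJelly.sh will differ):

```shell
# Source the script so its exports persist in the current shell;
# running it as ./PBJelly.sh would only set them in a throwaway subshell.
# The path is hypothetical, and the redirect/|| true just make this
# snippet harmless if the script isn't present.
. /path/to/PBJelly.sh 2>/dev/null || true

# Sanity check: the newly added directories should now appear here.
echo "$PATH"
```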
Next, I wrote another shell script to run the six stages of PBJelly sequentially, so in theory the SLURM manager should treat it as a single job and we won’t get in trouble for wasting resources.
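PBJelly’s six stages are setup, mapping, support, extraction, assembly, and output, each invoked as `Jelly.py <stage> Protocol.xml`. A sketch of such a runner script (the Protocol.xml path is a placeholder, and the `echo` makes this a dry run that just prints each command; remove it to actually execute the stages):

```shell
#!/bin/bash
# Sketch of a PBRun.sh-style runner: all six PBJelly stages back to back,
# so SLURM sees a single job. Shown as a dry run via "echo".
set -e   # bail out immediately if any stage fails
for stage in setup mapping support extraction assembly output; do
    echo Jelly.py "$stage" Protocol.xml
done
```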
Finally, the scary part. Hitting go.
sbatch -p srlab -A srlab PBRun.sh
should be enough to start our script while restricting it to our single node.
After chasing down some XML errors in my protocol file and fixing a missing Python library, I think it’s working!
The job initiation isn’t super exciting, but the job ID number it gives you is important: it’s how you track your job.
Job tracking is done via the
`scontrol show job <jobID>` command, and it outputs
with the important parts being the 4th line, which states that the job is still running, and the second-to-last line, which shows the file StdOut is being written to, in our case slurm-7091.out, which looks like
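For reference, those two fields can be pulled out of a captured record with grep. The record below is sample text with invented values, not the real output from job 7091; only the field names follow SLURM’s format:

```shell
# Sample scontrol-style job record (invented values).
record='JobId=7091 JobName=PBRun.sh
   Partition=srlab
   JobState=RUNNING Reason=None
   StdOut=/path/to/slurm-7091.out'

# Extract the two fields we actually check on:
echo "$record" | grep -o 'JobState=[A-Z]*'   # prints: JobState=RUNNING
echo "$record" | grep -o 'StdOut=[^ ]*'      # prints: StdOut=/path/to/slurm-7091.out
```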
Now… I guess we wait and keep our fingers crossed?