Transcriptome of fine flounder:

https://www.ncbi.nlm.nih.gov/sra/SRX612429

Genomes of Japanese flounder:

https://www.ncbi.nlm.nih.gov/assembly/GCF_001970005.1/#/def

https://www.ncbi.nlm.nih.gov/assembly/GCA_001904815.2#/def

Running PBJelly on Hyak:

I’ve got PBJelly successfully compiled and tested (using the procedure from here), so now we’re going to try running it!

First, check my Protocol.xml file. This file tells PBJelly where to find the reference scaffolds, the location of the new read files, and the blasr arguments to use (the important one being the number of cores), and it contains a job section for each individual input file.

Screen Shot 2017-04-12 at 12.11.17 PM
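In case the screenshot doesn’t render, this is the general shape of a Protocol.xml. Every path, filename, and blasr argument value below is a placeholder for illustration, not what I actually used:

```xml
<jellyProtocol>
    <!-- Reference scaffolds to be gap-filled -->
    <reference>/gscratch/srlab/data/reference_scaffolds.fasta</reference>
    <!-- Where PBJelly writes its working directories and results -->
    <outputDir>/gscratch/srlab/analyses/pbjelly_out/</outputDir>
    <!-- blasr arguments; -nproc sets the number of cores -->
    <blasr>-minMatch 8 -minPctIdentity 70 -bestn 1 -nCandidates 20 -maxScore -500 -nproc 28 -noSplitSubreads</blasr>
    <!-- One job entry per input read file -->
    <input baseDir="/gscratch/srlab/data/pacbio/">
        <job>filtered_subreads_1.fastq</job>
        <job>filtered_subreads_2.fastq</job>
    </input>
</jellyProtocol>
```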

I also built a little shell script to load the various libraries and path requirements for the different programs PBJelly uses, to make life easier, since we need special local versions of libraries thanks to the lack of root permissions. It’s entitled PBJelly.sh and is stored in /gscratch/srlab/scripts.

Screen Shot 2017-04-12 at 11.51.21 AM
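Since screenshots don’t paste well, here’s roughly the shape of such a script. All the install paths below are placeholders, not the actual contents of my script:

```shell
#!/bin/bash
# Sketch of PBJelly.sh -- all paths are hypothetical placeholders.

# PBSuite wants its install directory exported as SWEETPATH and on PYTHONPATH
export SWEETPATH=/gscratch/srlab/programs/PBSuite_15.8.24
export PYTHONPATH=$SWEETPATH:$PYTHONPATH
export PATH=$SWEETPATH/bin:$PATH

# Locally installed Python libraries (no root on Hyak, so no system installs)
export PYTHONPATH=/gscratch/srlab/local/lib/python2.7/site-packages:$PYTHONPATH

# blasr needs to be on PATH as well
export PATH=/gscratch/srlab/programs/blasr:$PATH
```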

The shell script is run via source /gscratch/srlab/scripts/PBJelly.sh

You can test to make sure this worked by echoing the $PATH

Screen Shot 2017-04-12 at 12.15.20 PM

Next, I wrote another shell script to run the six different stages of PBJelly sequentially, so in theory the whole pipeline is treated as a single job by the SLURM manager and we won’t get in trouble for wasting resources.

Screen Shot 2017-04-12 at 12.20.10 PM
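PBJelly’s six documented stages are setup, mapping, support, extraction, assembly, and output, each invoked as Jelly.py followed by the stage name and the protocol file. A sketch of such a driver script, with a placeholder protocol path:

```shell
#!/bin/bash
# Sketch of PBRun.sh -- the protocol path is a placeholder.
source /gscratch/srlab/scripts/PBJelly.sh

PROTOCOL=/gscratch/srlab/data/Protocol.xml
STAGES="setup mapping support extraction assembly output"

# Each stage depends on the previous one, so run them strictly in order
for stage in $STAGES; do
    echo "Starting PBJelly stage: $stage"
    Jelly.py "$stage" "$PROTOCOL" || { echo "Stage $stage failed" >&2; break; }
done
```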

Finally, the scary part. Hitting go.

sbatch -p srlab -A srlab PBRun.sh

should be enough to begin our script and use only our single node.
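The same settings could alternatively live at the top of PBRun.sh as #SBATCH directives, which sbatch reads from the script itself. A sketch; the node count and job name here are my own assumptions, not values from our actual script:

```shell
#!/bin/bash
#SBATCH --partition=srlab    ## same as -p srlab
#SBATCH --account=srlab      ## same as -A srlab
#SBATCH --nodes=1            ## keep the job on our single node
#SBATCH --job-name=PBJelly   ## hypothetical name, purely cosmetic
```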

After chasing down some XML errors in my protocol file and fixing a missing Python library, I think it’s working!

The job initiation isn’t super exciting, but the job ID number you’re given is important, as it’s how you track your job.

Screen Shot 2017-04-12 at 1.35.54 PM

Job tracking is done via the scontrol show job command, which outputs

Screen Shot 2017-04-12 at 1.37.31 PM

with the important parts being the fourth line, which shows that the job is still running, and the second-to-last line, which gives the file StdOut is being written to, in our case slurm-7091.out, which looks like

Screen Shot 2017-04-12 at 1.39.18 PM
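Putting the monitoring commands together (7091 is the job ID from the screenshots above; substitute your own):

```shell
# Job ID as reported by sbatch (ours was 7091)
JOBID=7091

# Full job record: JobState tells you whether it's still RUNNING,
# and StdOut tells you where output is being written
scontrol show job $JOBID

# Watch PBJelly's output as it's written
tail -f slurm-${JOBID}.out
```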

Now… I guess we wait and keep our fingers crossed?

Kaitlyn’s Notebook: BLAST on Jupyter

I went out with the #LabLadies and shucked my first oyster! I also really enjoyed looking around the Manchester facility and working with everyone! (Thanks for inviting me guys!)

I’ve also finally figured out how to work GitHub and Jupyter. I’ve now successfully run a file through BLAST using Jupyter, although it was a practice file downloaded from the internet rather than the Pacific oyster data. I still have to figure out how to BLAST that file, or whether that is even the correct file to BLAST. Anyway, I was having problems because I wasn’t specifying the entire path, but with a little help from Sam, I finally got it figured out! I also created my first repository, and although it looks pretty empty right now, I’ve moved directories in and out as well as individual files. I’m using the terminal to do this. I did download GitHub Desktop, but because I was already working in the terminal so much, it made more sense to me to stay there. I understand a lot more of the terminology now, as well as how GitHub tracks file changes on your computer. It was pretty exciting getting it all to finally work!
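The path problem Sam helped with boils down to giving BLAST absolute paths everywhere. A minimal sketch of such a run; every path and filename below is made up for illustration:

```shell
# Placeholder paths -- the point is that blastn gets full paths throughout
QUERY=/home/kaitlyn/blast/practice_query.fa
DB=/home/kaitlyn/blast/db/practice_db
OUT=/home/kaitlyn/blast/results/practice_blast.tab

# Tabular output (-outfmt 6) with a modest e-value cutoff
blastn -query "$QUERY" -db "$DB" -out "$OUT" -outfmt 6 -evalue 1e-5
```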

Finally, I’m working on an anemone project for my BIO463 (Advanced Physiology) class. I am going to manipulate salinity (hypoosmotic conditions) and temperature (an increase) and then test tentacle flexion, tentacle retraction time, changes in symbiont presence, and possible tentacle regeneration time of Aiptasia. I ordered them from Carolina Catalogs and they are surprisingly large (about 5 cm)! Unfortunately, their symbiont presence will probably prevent looking at methylation patterns; however, I will learn a lot about anemones from this project, and hopefully I can do a separate project studying changes in global methylation patterns for the Roberts Lab.

Jupyter on Hyak: Not yet.

I’ve been trying to get Jupyter Notebook to work on Hyak, and something is wonky, either in the directions or in the ports Hyak has open, that prevents connections on a specified port.

Using the old IKT Hyak directions here (the Mox directions link back to the how-to page), I tried the following in a terminal window:

ssh -N -f -L localhost:8899:localhost:8899 seanb80@mox.hyak.uw.edu

In another terminal window I followed with

module load anaconda2_4.3.1

and then

jupyter notebook --no-browser --port=8899

which seems to load fine.

I then open yet another terminal window and begin an interactive node via

srun -p srlab --pty /bin/bash

and attempt the required ssh portal to the interactive node via

ssh -N -f -L localhost:8899:localhost:8899 n2196

where n2196 corresponds to the interactive node I just opened.

This results in an error in the window where the tunnel was opened:

bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 8899
Could not request local forwarding.

as well as an error in the window where the initial tunnel to Mox was opened:

channel 2: open failed: connect failed: Connection refused

I’ve emailed the Hyak people, and hopefully they’ll get back soon with a workaround/what I’m doing wrong and we can make a tiny step forward.
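In the meantime, a couple of things worth checking, though these are guesses on my part rather than a confirmed diagnosis: something on the login node already holds port 8899, possibly the notebook started there earlier, or a stale backgrounded tunnel, since ssh -f keeps running after the terminal moves on.

```shell
# Port number from the failed attempt above
PORT=8899

# See what's already bound to the port on this node
lsof -i :$PORT

# ssh -N -f tunnels keep running in the background; stale ones from
# earlier attempts can hold the port. Find and kill them to free it:
pkill -f "ssh -N -f -L localhost:$PORT"
```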

The learning curve for Hyak just keeps getting steeper…