The K-10 low pH and K-10 amb pH populations spit out larvae today. Here’s how I know – you can see the larvae collecting against the banjo filter at water level in this K-10-amb pH group:
And in the water by shining a flashlight against the bucket:
And in the K-10-low pH group:
I collected the larvae using a 100um screen, with a 200um screen on top to filter out some of the debris. I then poured them into a tri-pour beaker, filling to 300 mL:
Using a plunger to thoroughly mix the larvae in the 300 mL, I pulled 3 x 500uL count samples into a well plate, and after checking out the larvae under the scope for a minute…
I added 1 drop of Lugols to each for counting purposes (this kills the larvae, and makes counting very easy):
It’s been 9 days since I moved the broodstock into their own buckets, so I cannot assume that these larvae were fertilized after the groups were separated – their sires may be from another group. While I will not rear these larvae through settlement, I want to count them to see how many eggs (which are definitely from this population) were produced. I’ll post counts in my notebook, and will also update this Larval Collection Spreadsheet
Larval counts: Sampled 500 uL three times, total volume sampled from is 300 mL
K-10-Amb pH: 80, 77, 80 = ave 79 larvae / 0.5 mL = 158 larvae / mL * 300 mL = 47,400 larvae
K-10-low pH: 52, 57, 77 = ave 62 larvae / 0.5 mL = 124 larvae / mL * 300 mL = 37,200 larvae
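The count arithmetic above can be sketched in a few lines of Python (values are the replicate counts from this entry):

```python
# Sketch of the larval count arithmetic: average the replicate counts,
# convert to larvae per mL, then scale up to the total tri-pour volume.
def total_larvae(counts, sample_ml=0.5, total_ml=300):
    avg = sum(counts) / len(counts)   # mean larvae per 0.5 mL count sample
    per_ml = avg / sample_ml          # larvae per mL
    return per_ml * total_ml          # larvae in the 300 mL tri-pour

print(total_larvae([80, 77, 80]))  # K-10-Amb pH: 47400.0
print(total_larvae([52, 57, 77]))  # K-10-low pH: 37200.0
```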
Also, to test my rearing buckets, I’ll keep these larvae around for a bit:
Other things that happened today:
- Installed in-line cartridge filters (20um, 5um, 1um)
- PSRF moved from Reed’s Shellfish Diet to a live algae cocktail yesterday (5/10); things look WAY cleaner today on the live algae diet
- Set up several larval rearing buckets, and compiled a list of things I need to purchase to make 24 complete setups.
- Installed immersion heater in header tank, and set temperature to 18degC. The temperatures in all tanks had increased to ~19.5degC over the past couple days, so to test the heater’s abilities I opened several valves full-blast and added only ambient (~11degC) water. After I had brought the temp down to ~16.5degC, I set the mixing valve temp to 14degC. Here’s how things look after a few hours:
Look at all the fecal deposits that the broodstock have produced in only 2 days!:
from The Shell Game http://ift.tt/2qbGhlk
Record scratch. Freeze frame.
Hi. I bet you’re wondering how I got here. Yaamini, I thought you said you selected targets yesterday? Your lab notebook says so! Well, I was a fool and tried calculating a bunch of coefficients of variation. And like Mr. T, I pity the fool.
At about 3 a.m., I realized that this coefficient of variation method wasn’t statistically sound! Better late than never, right? I spoke with Emma this morning and she reminded me of MSstats. She used it before for DDA analysis to identify differentially expressed proteins (i.e. potential targets). I’m going to use MSstats to identify potential targets. Then, I’ll merge that list with my table of protein information (ex. GO terms, functions, processes, etc.) to learn more about them.
A huge shoutout to all of the stats classes I’ve ever taken for making me realize I wasn’t doing things properly before it was too late!
(buyer beware: entry in progress, information below not final)
In this notebook, I detail the MSstats process, from adding it as a tool in Skyline to running analyses in R. I’m basing my analyses off of information in the MSstats manual.
from the responsible grad student http://ift.tt/2qdEril
Since file transfers to/from Hyak are being wonky and I already have the files uploaded, I’m going to start Andrew Spanjer’s salmon Trinity assembly.
First, I need to make a new BLASTx database using files provided by Andrew.
/gscratch/srlab/programs/ncbi-blast-2.6.0+/bin/makeblastdb -in SalmonUni.fasta -parse_seqids -dbtype prot
Building a new DB, current time: 05/12/2017 09:26:38
New DB name: /gscratch/srlab/data/andrew-trinity/SalmonUni.fasta
New DB title: SalmonUni.fasta
Sequence type: Protein
Keep MBits: T
Maximum file size: 1000000000B
Adding sequences from FASTA; added 141125 sequences in 5.81965 seconds.
Andrew supplied me with his two data files, all_val_1.fq.gz and all_val_2.fq.gz, so I threw those into a slurm batch file and fired up Trinity via sbatch.
[seanb80@mox1 andrew-trinity]$ cat TrinRun.sh
## Job Name
## Walltime (ten minutes)
## Memory per node
## Specify the working directory for this job
Trinity --seqType fq --left all_val_1.fq.gz --right all_val_2.fq.gz --CPU 50 --trimmomatic --max_memory 350G
[seanb80@mox1 andrew-trinity]$ sbatch -p srlab -A srlab TrinRun.sh
Submitted batch job 13312
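The header comments in TrinRun.sh above presumably paired with `#SBATCH` directives that didn’t survive the copy-paste. A minimal sketch of what a complete mox batch script might look like – the directive values here are hypothetical placeholders, not the actual settings used:

```shell
#!/bin/bash
## Job Name
#SBATCH --job-name=TrinRun
## Walltime (hypothetical; scontrol above reports a 20-day limit)
#SBATCH --time=20-00:00:00
## Memory per node
#SBATCH --mem=350G
## Specify the working directory for this job
#SBATCH --workdir=/gscratch/srlab/data/andrew-trinity

Trinity --seqType fq --left all_val_1.fq.gz --right all_val_2.fq.gz --CPU 50 --trimmomatic --max_memory 350G
```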
[seanb80@mox1 andrew-trinity]$ scontrol show job 13312
UserId=seanb80(557445) GroupId=hyak-srlab(415510) MCS_label=N/A
Priority=276 Nice=0 Account=srlab QOS=normal
JobState=RUNNING Reason=None Dependency=(null)
Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
RunTime=00:00:04 TimeLimit=20-00:00:00 TimeMin=N/A
StartTime=2017-05-12T09:56:17 EndTime=2017-06-01T09:56:17 Deadline=N/A
PreemptTime=None SuspendTime=None SecsPreSuspend=0
NumNodes=1 NumCPUs=28 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
MinCPUsNode=1 MinMemoryNode=350G MinTmpDiskNode=0
OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)
Watching top in another window, everything seems to be running OK, but Trimmomatic seems to only be using ~8 cores; hopefully Trinity will be better.
With demultiplexed files in Skyline I can export my results to a .csv file for analysis. While I still need to create a Retention Time Calculator and apply it to the data in Skyline, I’m taking an initial stab at finding differentially expressed proteins.
First step, define the metrics that I want to export:
Opening the resulting .csv file in Excel, it looks like this, with Area & Peptide Peak Found Ratio for each sample in separate columns.
For my initial analysis I’m going to use Excel (I’ll graduate up to R soon, I promise!); for the meantime, I used a Pivot Table to pull **Sum Area by Protein & Sample #**:
I need to normalize the sum area based on the total amount of protein, i.e. to remove any differences based on the amount of protein we loaded into each sample. We can do this with the Total Ion Current (TIC) for each sample file, which Emma provided for us. Here’s a screenshot of the file she provided:
On a new tab in my .xls spreadsheet I transposed the TIC data, then pulled all the Sum Area data from my pivot table into this new tab, normalizing it by dividing by the TIC for each sample.
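The same normalization could be sketched in Python; the protein and sample names and values below are hypothetical placeholders, not the real data:

```python
# Sketch of the TIC normalization: divide each (protein, sample) Sum Area
# by that sample's Total Ion Current. Values are made-up placeholders.
sum_area = {  # Sum Area by (protein, sample), as pulled from the pivot table
    ("protein_A", "S1"): 1.2e6,
    ("protein_A", "S2"): 0.9e6,
}
tic = {"S1": 3.0e9, "S2": 2.5e9}  # Total Ion Current per sample file

normalized = {
    (protein, sample): area / tic[sample]
    for (protein, sample), area in sum_area.items()
}
```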
It looks like an enormous # of peptide peaks were not identified by Skyline. I’m hoping that my RT calculator will improve this…
from The Shell Game http://ift.tt/2pqZpwY