Sam’s Notebook: Annotation – Olurida_v081 MAKER Proteins BLASTp

from Sam’s Notebook http://bit.ly/2Cs3RR0
via IFTTT

Sam’s Notebook: DNA Isolation – C.gigas Ploidy Experiment Ctenidia

Yesterday, Ronit initiated DNA isolation on ctenidia samples from his experiment (Google Sheet), using the following four samples:

  • D11
  • D12
  • D13
  • D14

Frozen tissue was excised from each frozen tissue block with a razor blade (weight not recorded) and pulverized under liquid nitrogen. Samples were incubated O/N @ 37°C (heating block) in 350uL of MB1 Buffer + 25uL Proteinase K, per the E.Z.N.A. Mollusc DNA Kit (Omega) instructions.

After the O/N incubation, I processed the samples according to the E.Z.N.A. Mollusc DNA Kit (Omega) with the following notes:

Samples were eluted in 100uL of Elution Buffer and quantified with the Roberts Lab Qubit 3.0 and the Qubit dsDNA BR Assay (Invitrogen), using 2uL of each sample.

Samples were stored in “Ronit’s gDNA Box #1 (positions B2 – B5)” in the FTR213 -20°C freezer.

Sam’s Notebook: qPCR – Relative mitochondrial abundance in C.gigas diploids and triploids subjected to acute heat stress via COX1

Using the C.gigas cytochrome c oxidase (COX1) primers I designed the other day, I ran a qPCR on a subset of Ronit’s diploid/triploid control/heat-shocked oyster DNA that Shelly had previously isolated and used for a global DNA methylation assay. The goal is to get a rough assessment of whether there are differences in relative mitochondrial abundance between these samples.

I used 50ng (2uL) of DNA in each qPCR reaction. The DNA had been previously diluted to 25ng/uL by Shelly for her DNA methylation assay (Google Sheet); however, I did need to prepare a dilution for sample T02 (control, triploid), as there wasn’t an existing 25ng/uL dilution in her box:

  • 5.54uL stock DNA (27ng/uL)
  • 0.46uL H2O
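A quick C1V1 = C2V2 check on that dilution (27 ng/uL stock, 5.54 uL stock plus 0.46 uL water into a 6 uL final volume):

```shell
# C1V1 = C2V2: 27 ng/uL stock, 5.54 uL stock + 0.46 uL water = 6.00 uL final
final_conc=$(awk 'BEGIN {printf "%.2f", (27 * 5.54) / (5.54 + 0.46)}')
echo "final concentration: ${final_conc} ng/uL"   # 24.93 ng/uL, close to the 25 ng/uL target
```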

Grace’s Notebook: BLAST nt taxonomy with C. bairdi had no taxonomy data

Today I checked on my finished BLAST on Mox with my assembled C. bairdi transcriptome and the nt taxonomy database. It finished after three-ish days, and the output file was large; however, all of the taxonomy cells were “N/A”. I made a GitHub Issue, got some input, and am now BLAST-ing again. Additionally, Sam posted a link for me to look into for getting Order, Family, etc. taxonomy information on my next BLAST, which I will get going before I leave for CA.

nt taxonomy BLAST: missing info

GitHub Issue 521

My output .tab file looked like this:
img

And another look in R (the script in the image was not saved as it is a copy of my original script):
img

I changed the export command to export BLASTDB=/gscratch/srlab/blastdbs/ncbi-nr-nt-20181114/, because this is where the nt and taxonomy databases exist.

Re-running BLAST now, and it will be done in a couple of days.
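When the new run finishes, the taxonomy columns can be spot-checked from the command line before pulling the file into R. A sketch (the filename and rows below are toy placeholders), counting rows where the sscinames column is still N/A:

```shell
# Spot-check an outfmt 6 table whose columns are:
# qseqid sseqid evalue bitscore staxids sscinames scomnames sskingdoms
blast_out="blast_nt_taxonomy.tab"   # placeholder filename

# Toy rows standing in for real output (taxid is a made-up placeholder):
printf 'q1\ts1\t1e-50\t200\t12345\tChionoecetes opilio\tsnow crab\tEukaryota\n' >  "$blast_out"
printf 'q2\ts2\t1e-10\t80\tN/A\tN/A\tN/A\tN/A\n'                                >> "$blast_out"

# Count rows where sscinames (column 6) is still N/A
missing=$(awk -F'\t' '$6 == "N/A" {n++} END {print n+0}' "$blast_out")
echo "rows with missing taxonomy: ${missing}"
```

If the count equals the total row count again, the BLASTDB export (and the taxdb files it points at) is the first thing to re-check.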

BLAST nt taxonomy with Order tax info

GitHub Issue 513

Currently, the output format is: -outfmt "6 qseqid sseqid evalue bitscore staxids sscinames scomnames sskingdoms", but I want to add more specific information like Order (Decapoda) and maybe Family.

URL to where I can find the information to do this as provided by Sam: https://ift.tt/2CiBnsE
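Separately from whatever that link describes, one low-tech option is to join the staxids column against a taxid-to-lineage table prepared beforehand (e.g. derived from NCBI's new_taxdump rankedlineage.dmp). A sketch with toy files — the filenames, taxid, and lineage values here are all placeholders:

```shell
# lineage.tsv: taxid<TAB>order<TAB>family (toy values; build the real table
# from NCBI's rankedlineage.dmp or similar)
printf '12345\tDecapoda\tOregoniidae\n' > lineage.tsv

# hits.tab: outfmt 6 rows with staxids in column 5 (toy row)
printf 'q1\ts1\t1e-50\t200\t12345\tChionoecetes opilio\tsnow crab\tEukaryota\n' > hits.tab

# Append Order and Family by joining on staxids (column 5)
awk -F'\t' 'NR==FNR {order[$1]=$2; family[$1]=$3; next}
            {print $0 "\t" order[$5] "\t" family[$5]}' lineage.tsv hits.tab > hits_with_lineage.tab

order_col=$(cut -f9 hits_with_lineage.tab)
echo "Order of first hit: ${order_col}"
```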

Will work on this this week so that it will be running before I get on my plane Friday.

from Grace’s Lab Notebook https://ift.tt/2SXzFmi
via IFTTT

Sam’s Notebook: BLASTx – Clupea pallasii (Pacific herring) liver and testes transcriptomes on Mox

Apparently we’ve had some interest in the Pacific herring transcriptomes (liver and testes) we produced many years ago. As such, Steven asked that I do a quick BLASTx to help annotate these two transcriptomes.

Two FastA files were downloaded from FigShare to Mox with the following commands:

Liver transcriptome:

wget https://s3-eu-west-1.amazonaws.com/pfigshare-u-files/88394/HerringHepaticTranscriptome34300contigs.fa

Testes transcriptome:

wget https://s3-eu-west-1.amazonaws.com/pfigshare-u-files/88395/HerringTesticularTranscriptome31545contigs.fa

Used the UniProtKB database that I downloaded on 20181008 as the BLAST database.

Here are the SBATCH script files used to run these jobs:

Liver job:

Testes job:

Since the two scripts are fairly short, I’ll put the full contents below.

Liver job:

  #!/bin/bash
  ## Job Name
  #SBATCH --job-name=herring_blast
  ## Allocation Definition
  #SBATCH --account=srlab
  #SBATCH --partition=srlab
  ## Resources
  ## Nodes
  #SBATCH --nodes=2
  ## Walltime (days-hours:minutes:seconds format)
  #SBATCH --time=02-0:00:10
  ## Memory per node
  #SBATCH --mem=120
  ## Turn on e-mail notification
  #SBATCH --mail-type=ALL
  #SBATCH --mail-user=samwhite@uw.edu
  ## Specify the working directory for this job
  #SBATCH --workdir=/gscratch/srlab/sam/outputs/20181212_blastx_herring_liver

  # Load Python Mox module for Python module availability
  module load intel-python3_2017

  # Document programs in PATH (primarily for program version ID)
  date >> system_path.log
  echo "" >> system_path.log
  echo "System PATH for $SLURM_JOB_ID" >> system_path.log
  echo "" >> system_path.log
  printf "%0.s-" {1..10} >> system_path.log
  echo ${PATH} | tr : \\n >> system_path.log

  # Copy SBATCH script to working directory
  cp /gscratch/srlab/sam/sbatch_scripts/20181212_herring_liver_blastx.sh .

  # Make blast database available to blast
  export BLASTDB=/gscratch/srlab/blastdbs/UniProtKB_20181008/

  # Set variables
  ## BLASTx
  blastx="/gscratch/srlab/programs/ncbi-blast-2.6.0+/bin/blastx"
  ## UniProt database
  uniprot="/gscratch/srlab/blastdbs/UniProtKB_20181008/20181008_uniprot_sprot.fasta"
  liver_fasta="/gscratch/srlab/sam/data/C_pallasii/transcriptomes/HerringHepaticTranscriptome34300contigs.fa"

  # Run blastx against UniProt database
  ${blastx} \
  -query ${liver_fasta} \
  -db ${uniprot} \
  -max_target_seqs 1 \
  -outfmt 6 \
  -num_threads 56 \
  > 20181212.herring.liver.blastx.outfmt6 \
  2> 20181212.herring.liver.blastx.err

Testes job:

  #!/bin/bash
  ## Job Name
  #SBATCH --job-name=herring_blast
  ## Allocation Definition
  #SBATCH --account=coenv
  #SBATCH --partition=coenv
  ## Resources
  ## Nodes
  #SBATCH --nodes=2
  ## Walltime (days-hours:minutes:seconds format)
  #SBATCH --time=02-0:00:10
  ## Memory per node
  #SBATCH --mem=120
  ## Turn on e-mail notification
  #SBATCH --mail-type=ALL
  #SBATCH --mail-user=samwhite@uw.edu
  ## Specify the working directory for this job
  #SBATCH --workdir=/gscratch/srlab/sam/outputs/20181212_blastx_herring_testes

  # Load Python Mox module for Python module availability
  module load intel-python3_2017

  # Document programs in PATH (primarily for program version ID)
  date >> system_path.log
  echo "" >> system_path.log
  echo "System PATH for $SLURM_JOB_ID" >> system_path.log
  echo "" >> system_path.log
  printf "%0.s-" {1..10} >> system_path.log
  echo ${PATH} | tr : \\n >> system_path.log

  # Copy SBATCH script to working directory
  cp /gscratch/srlab/sam/sbatch_scripts/20181212_herring_testes_blastx.sh .

  # Make blast database available to blast
  export BLASTDB=/gscratch/srlab/blastdbs/UniProtKB_20181008/

  # Set variables
  ## BLASTx
  blastx="/gscratch/srlab/programs/ncbi-blast-2.6.0+/bin/blastx"
  ## UniProt database
  uniprot="/gscratch/srlab/blastdbs/UniProtKB_20181008/20181008_uniprot_sprot.fasta"
  testes_fasta="/gscratch/srlab/sam/data/C_pallasii/transcriptomes/HerringTesticularTranscriptome31545contigs.fa"

  # Run blastx against UniProt database
  ${blastx} \
  -query ${testes_fasta} \
  -db ${uniprot} \
  -max_target_seqs 1 \
  -outfmt 6 \
  -num_threads 56 \
  > 20181212.herring.testes.blastx.outfmt6 \
  2> 20181212.herring.testes.blastx.err
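Once the jobs finish, a quick way to gauge annotation coverage is counting distinct query IDs in the outfmt 6 output (qseqid is column 1). A sketch using toy rows and a stand-in filename in place of the real 20181212.herring.liver.blastx.outfmt6:

```shell
out="liver.blastx.outfmt6"   # stand-in for 20181212.herring.liver.blastx.outfmt6

# Toy outfmt 6 rows (two HSPs for contig_1, one hit for contig_2; accessions are made up)
printf 'contig_1\tsp|Q00001|TOY1\t90.0\t100\t10\t0\t1\t300\t1\t100\t1e-50\t200\n'    >  "$out"
printf 'contig_1\tsp|Q00001|TOY1\t85.0\t90\t13\t0\t310\t580\t110\t199\t1e-40\t180\n' >> "$out"
printf 'contig_2\tsp|Q00002|TOY2\t70.0\t80\t24\t1\t10\t250\t5\t85\t1e-20\t120\n'     >> "$out"

# Number of distinct contigs with at least one BLASTx hit
annotated=$(cut -f1 "$out" | sort -u | awk 'END {print NR}')
echo "contigs with hits: ${annotated}"
```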

Sam’s Notebook: FastQC and Trimming – Metagenomics (Geoduck) HiSeqX Reads from 20180809

Steven tasked me with assembling our geoduck metagenomics HiSeqX data. The first part of the process is examining the quality of the sequencing reads, performing quality trimming, and then checking the quality of the trimmed reads. It’s also possible (likely) that I’ll need to run another round of trimming. The process is documented in the Jupyter Notebook linked below. After these reads are cleaned up, I’ll transfer them over to our HPC nodes (Mox) and try assembling them.

Jupyter Notebook (GitHub):

Shelly’s Notebook: Wed. Dec 12, 2018

Geoduck Broodstock Experiment

Water chemistry

  • Steven calibrated all Apex pH probes
  • Titrator pH calibrations were inaccurate again. After 3 attempts, I decided to poison the samples and not run the titrator. pH calibration data here.

Attempt 1 @10am:
| standard | mV | pH |
|----------|--------|------|
| 4.0 | 164.3 | 4.12 |
| 7.0 | 12 | 6.79 |
| 10.0 | -154.2 | 9.7 |

Attempt 2 @2pm:
| standard | mV | pH |
|----------|--------|------|
| 4.0 | 167.1 | 4.08 |
| 7.0 | 8.4 | 6.85 |
| 10.0 | -159.6 | 9.79 |

Attempt 3 @3pm:
| standard | mV | pH |
|----------|--------|------|
| 4.0 | 167.5 | 4.07 |
| 7.0 | 6.5 | 6.89 |
| 10.0 | -159.0 | 9.78 |

***need to order new pH probe
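The mV readings themselves support replacing the probe: a healthy glass electrode responds at roughly -59.16 mV per pH unit at 25°C (the Nernstian slope), and electrodes are typically considered acceptable at around 95-105% of that. Computing the slope between the pH 4 and pH 7 standards from attempt 3:

```shell
# Electrode slope check using attempt 3 readings (pH 4 -> 167.5 mV, pH 7 -> 6.5 mV);
# ideal Nernstian response is about -59.16 mV/pH at 25 C
slope=$(awk 'BEGIN {printf "%.2f", (6.5 - 167.5) / (7.0 - 4.0)}')
efficiency=$(awk -v s="$slope" 'BEGIN {printf "%.1f", (s / -59.16) * 100}')
echo "slope: ${slope} mV/pH (${efficiency}% of Nernstian)"
```

At roughly 91% electrode efficiency, the sluggish response is consistent with the probe being past its useful life rather than an operator issue.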

  • Kaitlyn took discrete measurements and 125mL water samples from all 6 tanks, starting at 4:40pm. Added 50uL mercuric chloride to all bottles, and stored them in the box below the titrator.
  • I downloaded Apex data

Respirometry:

  • Set-up (started at 11am; took ~3 hours)
    1. Finish preparing 7 new chamber lids
      • cut and add clear hose to lids for filling chambers (just cut the existing hose on Sam’s 4 lids in half)
      • flex-tape the corner cracks near the pump cord in Sam’s 4 lids because the silicone doesn’t make a good seal
      • number Sam’s 4 pump plugs with paint pen
      • have 11 total chambers
    2. Rig up outlets for powering all the pumps
    3. Make 2 additional platforms for chambers to stand on while in water bath (tanks)
    4. Move geoducks to make space for platforms and chambers
    5. Add treatment water to all chambers
    6. Match up O2 and temp. sensors to chamber numbers and fix in cable glands on chamber lids
    7. Power up pumps and make sure they run
    8. Determine biovolume of randomly selected animals and PVC stand to go in chambers
      • Biovolume data
        • This would benefit from a more accurate volume-measuring method (e.g. a 5L or 10L beaker). We are currently using the chamber itself to estimate the biovolume of these large animals, and the resolution is poor.
    9. Top off chambers by pumping treatment water in through clear hose and cap hoses
    10. Tighten all cable glands
  • Started trial by ~2pm, and stopped at 3:30pm

***need to measure total chamber volume in order to normalize animals

  • Clean-up (~1 hour)
    1. return animals to their original tanks
    2. remove probes from lids and soak in fresh water (used one chamber for this)
    3. hose off platforms, chambers, and chamber lids
    4. rinse pumps with fresh water
    5. wipe probes dry with paper towel, then wipe with 70% ethanol. Coil up and place in original plastic bags
    6. export data from PreSens software

from shellytrigg http://shellytrigg.github.io/30th-post/
via IFTTT