Grace’s Notebook: DIA 2015 Oysterseed Plan (finish by Apr 17)

Today we had our first WSG meeting with the goal of having all of the projects wrapped up by April 17th.

We will be meeting every Wednesday at 2pm.

Things to focus on for this week:

  • METHODS SECTION
  • Creating easy visuals comparing 23C and 29C

To create a Venn diagram for 23C vs 29C:

  1. BLAST proteome used in DIA with uniprot/swissprot database (make a new database with the newest uniprot/swissprot fasta? (#618))
  2. Join blast output with GO and GOslim terms
  3. Join GO and GOslim with skyline output file
  4. Compare terms between 23C and 29C
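Steps 2–4 above can be sketched with coreutils `join`/`comm`. Everything here is mock data with hypothetical file names and column layouts, not the real pipeline files:

```shell
# mock BLAST output: protein ID -> uniprot accession
printf 'CHOYP_001\tP12345\nCHOYP_002\tQ67890\n' > blast_hits.tab
# mock uniprot accession -> GO slim term mapping
printf 'P12345\tGO:0006412\nQ67890\tGO:0008152\n' > uniprot2goslim.tab

# join on the accession (both files must be sorted on the join field)
sort -k2,2 blast_hits.tab > blast_sorted.tab
sort -k1,1 uniprot2goslim.tab > go_sorted.tab
join -1 2 -2 1 blast_sorted.tab go_sorted.tab > protein2goslim.tab

# mock per-temperature protein lists (e.g. pulled from the Skyline output)
printf 'CHOYP_001\nCHOYP_002\n' > proteins_23C.txt
printf 'CHOYP_001\n' > proteins_29C.txt

# GO slim terms per temperature; comm gives the Venn partitions
for t in 23C 29C; do
  grep -F -f proteins_${t}.txt protein2goslim.tab | cut -d' ' -f3 | sort -u > terms_${t}.txt
done
comm -12 terms_23C.txt terms_29C.txt   # terms shared by both temps
comm -23 terms_23C.txt terms_29C.txt   # terms unique to 23C
```

The `comm -12` / `-23` / `-13` outputs map directly onto the three regions of a two-set Venn diagram.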

Steven will take a look at Oyster-Larval-Proteomics paper and capture all the important and interesting stuff.

Methods: Proteomics

Results:
Abundance counts
What proteins are expressed in total
Comparison of proteins expressed between two temps

My notes from meeting:
Due by April 17

Point is to bring up what challenges are and get them solved asap —> GITHUB ISSUES

FOCUS ON METHODS JUST ASSOCIATED WITH PROTEOMICS

FOCUS ON GETTING RESULTS

What journal should this go to?

probably first time DIA shotgun proteomics done with gigas oysterseed

next week: have methods done, maybe best three interesting things from results

from Grace’s Lab Notebook https://ift.tt/2TNTAsx
via IFTTT

Sam’s Notebook: Transcriptome Annotation – Geoduck Gonad with BLASTx on Mox

I’ll be annotating the transcriptome assembly (from 20190215) using Trinotate and part of that process is having BLASTx output for the Trinity assembly, so have run BLASTx on Mox.

SBATCH script:

  #!/bin/bash
  ## Job Name
  #SBATCH --job-name=blastx_gonad_01
  ## Allocation Definition
  #SBATCH --account=coenv
  #SBATCH --partition=coenv
  ## Resources
  ## Nodes
  #SBATCH --nodes=1
  ## Walltime (days-hours:minutes:seconds format)
  #SBATCH --time=25-00:00:00
  ## Memory per node
  #SBATCH --mem=120G
  ##turn on e-mail notification
  #SBATCH --mail-type=ALL
  #SBATCH --mail-user=samwhite@uw.edu
  ## Specify the working directory for this job
  #SBATCH --workdir=/gscratch/scrubbed/samwhite/outputs/20190318_blastx_geoduck_gonad_01_RNAseq

  # Load Python Mox module for Python module availability
  module load intel-python3_2017

  # Document programs in PATH (primarily for program version ID)
  date >> system_path.log
  echo "" >> system_path.log
  echo "System PATH for $SLURM_JOB_ID" >> system_path.log
  echo "" >> system_path.log
  printf "%0.s-" {1..10} >> system_path.log
  echo ${PATH} | tr : \\n >> system_path.log

  wd="$(pwd)"

  # Paths to input/output files
  blastx_out="${wd}/blastx.outfmt6"
  sp_db="/gscratch/srlab/programs/Trinotate-v3.1.1/admin/uniprot_sprot.pep"
  trinity_fasta="/gscratch/scrubbed/samwhite/outputs/20190215_trinity_geoduck_gonad_01_RNAseq/trinity_out_dir/Trinity.fasta"

  # Paths to programs
  blast_dir="/gscratch/srlab/programs/ncbi-blast-2.8.1+/bin"
  blastx="${blast_dir}/blastx"

  # Run blastx on Trinity fasta
  ${blastx} \
  -query ${trinity_fasta} \
  -db ${sp_db} \
  -max_target_seqs 1 \
  -outfmt 6 \
  -evalue 1e-3 \
  -num_threads 28 \
  > ${blastx_out}

Sam’s Notebook: Transcriptome Annotation – Geoduck Ctenidia with BLASTx on Mox

I’ll be annotating the transcriptome assembly (from 20190215) using Trinotate and part of that process is having BLASTx output for the Trinity assembly, so have run BLASTx on Mox.

SBATCH script:

  #!/bin/bash
  ## Job Name
  #SBATCH --job-name=blastx_ctendia
  ## Allocation Definition
  #SBATCH --account=coenv
  #SBATCH --partition=coenv
  ## Resources
  ## Nodes
  #SBATCH --nodes=1
  ## Walltime (days-hours:minutes:seconds format)
  #SBATCH --time=25-00:00:00
  ## Memory per node
  #SBATCH --mem=120G
  ##turn on e-mail notification
  #SBATCH --mail-type=ALL
  #SBATCH --mail-user=samwhite@uw.edu
  ## Specify the working directory for this job
  #SBATCH --workdir=/gscratch/scrubbed/samwhite/outputs/20190318_blastx_geoduck_ctenidia_RNAseq

  # Load Python Mox module for Python module availability
  module load intel-python3_2017

  # Document programs in PATH (primarily for program version ID)
  date >> system_path.log
  echo "" >> system_path.log
  echo "System PATH for $SLURM_JOB_ID" >> system_path.log
  echo "" >> system_path.log
  printf "%0.s-" {1..10} >> system_path.log
  echo ${PATH} | tr : \\n >> system_path.log

  wd="$(pwd)"

  # Paths to input/output files
  blastx_out="${wd}/blastx.outfmt6"
  sp_db="/gscratch/srlab/programs/Trinotate-v3.1.1/admin/uniprot_sprot.pep"
  trinity_fasta="/gscratch/scrubbed/samwhite/outputs/20190215_trinity_geoduck_ctenidia_RNAseq/trinity_out_dir/Trinity.fasta"

  # Paths to programs
  blast_dir="/gscratch/srlab/programs/ncbi-blast-2.8.1+/bin"
  blastx="${blast_dir}/blastx"

  # Run blastx on Trinity fasta
  ${blastx} \
  -query ${trinity_fasta} \
  -db ${sp_db} \
  -max_target_seqs 1 \
  -outfmt 6 \
  -evalue 1e-3 \
  -num_threads 28 \
  > ${blastx_out}

Shelly’s Notebook: Thurs. Mar 14, 2019, Oyster Seed Proteomics

Mapping to SR lab GO slim terms

  • I was unable to get semantic similarity for these terms (which is needed in order to relate proteins to one another through their terms) because they don’t map to terms in the goslim_generic.obo file or to the GO data that ontologyX parses. So I’m not going to use these.

Cleaning up analysis for poster figs

See analysis “CreateNodeAnd0.5GoSemSimEdgeAttr_ChiSqPval0.1_ASCA_EvalCutoff.R” to make files to import to cytoscape to generate poster figs.

quick summary of code:

make protein node attribute file

  1. read in fold-change and chisq pvalue data
  2. extract proteins that mapped to uniprot DB with an e-value cutoff of 10^-10
  3. select proteins with FDR-corrected ChiSq prop test pvalue < 0.1
  4. read in ASCA data and combine with ChiSq prop test pvalue 0.1 selected proteins to make a comprehensive list of selected proteins with their GO terms
  5. Translate all GO terms of selected proteins to GO slim terms:
  • use GSEA to make a list of BP and MF GO slims in the data
  • use OntologyX to make a list of GO IDs in the data and the other GO IDs (ancestor, parent, child) that they map to
  • parse out GO slims from the ‘other GO IDs’ column to make a file that contains the original GO ID and the GO slim ID
  • ignore GO slim terms that are too broad (“GO:0008150 GO:0003674 GO:0005575”)
  • 130 unique proteins remain after steps above
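Step 2 (the 10^-10 e-value cutoff) can be sketched with awk on BLAST outfmt6-style rows, where column 11 is the e-value; the rows below are mock data, not the real uniprot mapping:

```shell
# mock blast outfmt6-style rows: query, subject, ..., e-value in column 11
printf 'prot1\tsp|P1\t90\t100\t1\t0\t1\t100\t1\t100\t1e-20\t200\nprot2\tsp|P2\t80\t100\t5\t0\t1\t100\t1\t100\t1e-5\t90\n' > blast.outfmt6
# keep only hits passing the 10^-10 e-value cutoff (awk compares 1e-20 numerically)
awk '$11 <= 1e-10' blast.outfmt6 > blast_evalue_filtered.tab
cat blast_evalue_filtered.tab
```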

make edge attribute file

  1. Calculate GO semantic similarities for all GO slim terms using OntologySimilarity function, and only select GO relationships that have >=0.5 similarity
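The similarity values themselves come from the ontologySimilarity analysis in R; the >=0.5 edge filter is a one-liner. Sketched here on a mock similarity table (hypothetical term pairs and scores):

```shell
# mock pairwise GO slim semantic-similarity table: term1, term2, similarity
printf 'GO:0006412 GO:0008152 0.72\nGO:0006412 GO:0006457 0.31\nGO:0008152 GO:0006457 0.55\n' > go_semsim.tab
# keep only edges with similarity >= 0.5 for the cytoscape edge attribute file
awk '$3 >= 0.5' go_semsim.tab > edge_attributes.tab
cat edge_attributes.tab
```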

make GO node attribute file

  1. Calculate magnitude fold change normalized by number of proteins to use as a GO node attribute
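One reading of “magnitude fold change normalized by number of proteins” is the mean |fold change| per GO term; that interpretation is my assumption, sketched on mock data:

```shell
# mock table: GO slim term, protein, log2 fold change
printf 'GO:0006412 p1 2.0\nGO:0006412 p2 -1.0\nGO:0008152 p3 0.5\n' > prot_fc.tab
# per GO term: sum of |fold change| divided by the number of proteins in the term
awk '{ fc = ($3 < 0 ? -$3 : $3); s[$1] += fc; n[$1]++ }
     END { for (g in s) print g, s[g] / n[g] }' prot_fc.tab | sort > go_node_attr.tab
cat go_node_attr.tab
```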

loaded data into cytoscape to make poster figs

Should all stat. methods (clustering, ASCA, and Chi sq. proportion test) be used for selecting altered proteins?

Going back to the initial protein selection to determine which methods should be used by looking at the differences in protein abundance between temperatures for the proteins each method identifies.

  • Originally there were some proteins selected (either by ASCA, clustering, or ChiSq prop test) that don’t appear to show a significant fold change between temperatures (see fig in ppt). It would be good to see how these proteins were identified as significantly changed and determine if one method should not be used. Refer to VerifyStatsProteinSelection.R for analysis.

I plotted protein abundances as total number of spectra AND as average NSAF values because the Chi sq proportions test was done on total number of spectra and ASCA was done on NSAF values. There’s no reason ASCA or clustering couldn’t be done on total number of spectra, but we did it on average NSAF values per the pipeline developed by Emma.

Protein abundances (total number of spectra) of ASCA selected proteins

Protein abundances (average NSAF values) of ASCA selected proteins

Protein abundances (total number of spectra) of Chi sq pval 0.1 selected proteins

Protein abundances (average NSAF values) of Chi sq pval 0.1 selected proteins

Protein abundances (total number of spectra) of clustering selected proteins

Protein abundances (average NSAF values) of clustering selected proteins

  • interestingly, some average NSAF values show a different pattern over time than total num spectra do. I don’t really know what this means
  • also these plots show the proteins selected seem to show a difference in abundances (whether avg NSAF value or TotNumSpec value) between treatments on at least one day. So maybe when I first plotted this in cytoscape, something wasn’t displaying correctly?
  • clustering only identified one protein uniquely, so we don’t gain much by including that method, except for the validation of the other methods, since 13/14 proteins identified by clustering overlap with other methods
    • kaitlyn proposed we could use clustering after ASCA and prop test selection of proteins
      • we could take all proteins, mapped and unmapped and cluster, then assign a cluster number that can be used in network mapping
        • this might be cool for trying to infer functions of unknown proteins:
        • for each cluster, we could calculate the normalized magnitude fold change and convert from protein to GO term (for unknown proteins, we would assign the same GO terms as are in the cluster) and see how including the magnitude FC of unknown proteins changes GO networks. This could be too much of a tangent though.

from shellytrigg https://ift.tt/2USdXBa
via IFTTT

Grace’s Notebook: RNeasy Extraction Day 1 on 24 samples

Today I tried out the new plan for extracting RNA. It took quite a long time and none of the 24 samples had detectable RNA. Details in post.

Set up and preparation

The longest part of this whole thing was labeling tubes.

I labeled:

  • 24 RNase-free snap cap tubes (for the 15ul of slurry)
  • 24 QIA shredder columns (cap and side of tube)
  • 24 gDNA columns
  • 24 RNeasy MinElute columns (had to do this right before use because they’re supposed to be cold… but it took forever, so they probably weren’t cold)
  • 24 1.5ml snap cap tubes that contain the eluted RNA

I prepared solutions for 24 samples plus extra:

  • 70% ethanol (7mL ethanol and 3mL DEPC-H2O)
  • 80% ethanol (10mL ethanol and 2.5mL DEPC-H2O)
  • BufferRLT Plus and B-ME (9mL Buffer RLT and 90ul B-ME)

Sampling out slurry

I selected 24 samples –> two from each of the 12 temp treatments/infection status groups

  Tube number   Sample day   Infection status   Temp trtmnt
  135           9            0                  NA
  31            9            0                  NA
  90            9            1                  NA
  142           9            1                  NA
  317           12           0                  Amb
  342           12           0                  Amb
  352           12           1                  Amb
  326           12           1                  Amb
  242           12           0                  Cold
  236           12           0                  Cold
  234           12           1                  Cold
  212           12           1                  Cold
  285           12           0                  Warm
  266           12           0                  Warm
  271           12           1                  Warm
  291           12           1                  Warm
  499           26           0                  Amb
  506           26           0                  Amb
  468           26           1                  Amb
  503           26           1                  Amb
  458           26           0                  Cold
  419           26           0                  Cold
  438           26           1                  Cold
  455           26           1                  Cold

img

This part also took a really long time. For one, finding the tubes in the -80 took some time because I did not place them in there in number order.

Additionally, it took a long time to let them thaw, vortex for a few seconds, and then sample out 15ul of the slurry.

Thawed hemolymph slurry:
img

img

Starting protocol finally (1.5hrs for 24 samples)

  1. Added 250ul of Buffer RLT + B-ME (did under hood in 209 because it smells awful)
  2. Vortexed all for a few seconds
  3. Transferred contents to QIA shredder columns (under hood as well because stinky)
  4. Centrifuge 2min full speed (takes a while putting in and taking out 24 tubes)
  5. Transfer flow-through to gDNA eliminator column with 2ml collection tubes. Centrifuge 30s at full speed. Discard column. Save flow-through. (While this was happening, I was furiously unwrapping and labeling RNeasy MinElute columns… took a long time… samples sat in centrifuge for a few mins…)
  6. Add 350ul 70% ethanol (pipetted individually). Mix by pipetting.
  7. Transfer sample to RNeasy MinElute column. Close lids. Vacuum.
    img
  8. Add 750ul of Buffer RW1 (used repeat pipet- amazing!). Close lid. Vacuum.
  9. Add 500ul of Buffer RPE (used repeat pipet). Close lid. Vacuum.
  10. Add 500ul 80% ethanol (used repeat pipet). I think I miscalculated my volumes in the preparation, because I ran out of it and had to run and make more. Close lid. Vacuum 5mins. (While vacuuming 5mins, I was labeling the 1.5ml snap cap tubes).
  11. Put RNeasy column in new 1.5ml snap cap. Add 14ul RNase free water to center of membrane. Cut off pink lids. Centrifuge for 1min at full speed.

img

Qubit

Made working solution: 5.6mL Buffer for RNA HS + 28ul RNA HS dye

Made standards: 10ul of each + 190ul working solution

Ran 1ul of each sample (added 199ul working solution)

Vortexed all.

ALL TUBES READ “OUT OF RANGE, TOO LOW”

-80
Put the hemolymph pellets that I thawed and used 15ul of in -80 (Rack 7, col 2, row 2)
Put eluted “RNA” samples in -80 (Rack 7, col 3, row 1)

from Grace’s Lab Notebook https://ift.tt/2Yb0Pcr
via IFTTT

Sam’s Notebook: MBD Enrichment – DNA Quantification of C.virginica MBD Samples from 20190312

Finished MBD enrichment on 20190312 using the MethylMiner kit. Since we were out of Qubit assay tubes, I could not quantify these samples when I initially completed the ethanol precipitation. Tubes are in, so I went forward with quantification using the Roberts’ Lab Qubit 3.0 and the 1x High Sensitivity dsDNA assay.

Used 1uL of each sample.

Yaamini’s Notebook: DML Analysis Part 29

Describing general methylation trends

A.K.A. that time I used a lot of bash for loops and awk commands while understanding what I was doing.

Total DNA recovered

I noticed Mac’s 2013 paper included how much of the original input DNA was recovered from MBD procedures. I figured it was a good thing to include in my paper too! In this spreadsheet, I documented the µg of DNA input I used from each sample. In total, I used 37.28 µg. After MBD, the samples were resuspended in 25 µL of buffer. The total yield was 1.42515 µg. This meant that only 3.82% of DNA was recovered after MBD.
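A quick shell sanity check on the recovery percentage reported above (output mass divided by input mass):

```shell
# percent DNA recovered after MBD = total yield / total input * 100
awk 'BEGIN { printf "%.2f\n", 1.42515 / 37.28 * 100 }'   # prints 3.82
```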

Counting reads

One important part of characterizing general methylation trends is to explain how many reads were used for analysis. In this Jupyter notebook, I downloaded FastQC and bismark reports. I originally thought I should download the full trimmed and untrimmed files, but in this issue Steven pointed out that all of that information would be stored in summary reports.

I used a similar series of commands for each report to obtain read counts:

  1. Confirm that the for loop selects the files I want

     %%bash
     for f in *zip
     do
       echo ${f}
     done

  2. Unzip files if needed

     %%bash
     for f in *zip
     do
       unzip ${f}
     done

  3. Isolate the number of reads from each report and concatenate into a new file

     %%bash
     for f in *fastqc
     do
       grep "Total Sequences *" ${f}/fastqc_data.txt >> 2019-03-17-Untrimmed-Read-Counts.txt
     done

I counted untrimmed reads, trimmed reads, and reads that were not mapped to the genome. For the unmapped reads, I subtracted the number of unmapped reads from the number of trimmed reads to obtain the number of reads mapped to the genome. There are 279,681,264 untrimmed sequence reads. Of 275,914,272 trimmed paired-end reads, 190,675,298 reads were mapped.
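The mapped-read arithmetic (mapped = trimmed − unmapped) as a quick shell check; the unmapped count of 85,238,974 is derived from the two totals reported above:

```shell
# mapped reads = trimmed reads - unmapped reads, per the counts above
trimmed=275914272
unmapped=85238974   # implied by the trimmed and mapped totals above
mapped=$((trimmed - unmapped))
echo "${mapped}"    # 190675298
```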

Identify methylated CpG loci

I want to describe general methylation trends, irrespective of pCO2 treatment, in my C. virginica gonad data. Claire and Mac both had sections in their papers where they determined whether a CpG locus was methylated or not. From Mac’s 2013 PeerJ paper:

A CpG locus was considered methylated if at least half of the reads remained unconverted after bisulfite treatment.

They characterized this using methratio.py in BSMAP, but Steven pointed out I could use .cov files in this issue. In this Jupyter notebook, I first identified loci with 5x coverage for each sample by using this awk command in a loop:

 awk '{print $1, $2-1, $2, $4, $5+$6}' ${f} | awk '{if ($5 >= 5) { print $1, $2, $3, $4 }}'

The coverage file has the following columns: chromosome, start, end, methylation percentage, count methylated, and count unmethylated. In the above command, I correct the start and stop positions ($2-1, $2) and add the counts methylated and unmethylated ($5+$6) in each file ${f}. If the new fifth column, the total count of methylated and unmethylated reads, was at least 5 ($5 >= 5), it meant that I had 5x coverage for that locus. I could then redirect that information to a new file.

The next step was to concatenate all loci with 5x coverage across my five control samples. In this step, I’m essentially mimicking methylKit’s unite. I used a series of join commands for this task, then used awk to clean up the columns.
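A minimal sketch of what such a join looks like, assuming each per-sample file holds “chrom start end percent” rows for its 5x loci (mock data and file names, not the real samples):

```shell
# two mock per-sample 5x loci files: chrom, start, end, percent methylation
printf 'NC_1 10 11 75.0\nNC_1 20 21 50.0\n' > s1_5x.tab
printf 'NC_1 10 11 80.0\nNC_1 30 31 10.0\n' > s2_5x.tab

# build a chrom-start key so join can line up loci across samples
for f in s1 s2; do
  awk '{print $1"-"$2, $4}' ${f}_5x.tab | sort -k1,1 > ${f}_key.tab
done

# keep only loci present in BOTH samples (what methylKit's unite() does)
join s1_key.tab s2_key.tab > united_loci.tab
cat united_loci.tab   # NC_1-10 75.0 80.0
```

With five samples this chains four joins, one per additional sample.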

Screen Shot 2019-03-18 at 9 15 05 PM

Screen Shot 2019-03-18 at 9 15 20 PM

Screen Shot 2019-03-18 at 9 15 31 PM

The final file with 5x loci across all samples can be found here. By summing the percent methylation information from each sample for each locus, I could characterize each locus’s methylation status, separate the loci, and save them in new files.

Using wc -l to count the number of loci in each file, I had 63,827 total loci across all samples with at least 5x coverage. This is very different from the 14,458,703 CpG motifs across the C. virginica genome. Of the loci with at least 5x coverage, 60,552 were methylated, 2,796 were sparsely methylated, and 479 were unmethylated.
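A hedged sketch of how that classification could look with awk. The >=50% / <=10% mean-methylation cutoffs are my assumption (following the “at least half of the reads” rule quoted above), and the table is mock data, not the real united loci file:

```shell
# mock united loci: key column, then per-sample percent methylation columns
printf 'NC_1-10 75.0 80.0\nNC_1-20 30.0 20.0\nNC_1-30 5.0 0.0\n' > united.tab

# classify each locus by its mean percent methylation across samples;
# the >= 50 (methylated) and <= 10 (unmethylated) cutoffs are assumptions
awk '{ s = 0; for (i = 2; i <= NF; i++) s += $i; m = s / (NF - 1)
       if (m >= 50) print $1 > "methylated.tab"
       else if (m <= 10) print $1 > "unmethylated.tab"
       else print $1 > "sparsely.tab" }' united.tab
cat methylated.tab   # NC_1-10
```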

Characterize location of methylated CpG

The last step to describe methylated CpG is to figure out where the 60,552 loci are in the genome! I set up intersectBed in my Jupyter notebook to find overlaps between my methylated loci and exons, introns, mRNA coding regions, transposable elements, and putative promoter regions. When creating my BED file, I needed to specify tab delimiters (\t) to get intersectBed to recognize the file:

 awk '{print $1"\t"$2"\t"$3}' 2019-03-18-Control-5x-CpG-Loci-Methylated.bedgraph \
 > 2019-03-18-Control-5x-CpG-Loci-Methylated.bed

Similar to the DML, methylated CpG were primarily located in genic regions (56,055 CpG; 92.6%), with 3,083 unique genes represented. More CpG (40,127; 66.27%) overlapped with exons than with introns (17,510; 28.92%). I found 4,687 CpG (7.74%) in transposable elements and 1,221 CpG (2.02%) in putative promoter regions 1 kb upstream of transcription start sites. There were 2,278 methylated loci (3.76%) that did not overlap with exons, introns, transposable elements, or putative promoters. All lists of overlapping loci can be found in this folder.
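The real analysis used intersectBed; here is a plain-awk stand-in for `intersectBed -a loci.bed -b exons.bed -u` on mock intervals, just to illustrate the half-open overlap test BED coordinates use:

```shell
# mock methylated-loci BED and exon BED (tab-delimited, half-open intervals)
printf 'NC_1\t100\t101\nNC_1\t500\t501\n' > loci.bed
printf 'NC_1\t90\t200\n' > exons.bed

# report each locus overlapping any exon: same chrom, and the
# half-open intervals [start,end) intersect
awk 'NR==FNR { c[NR]=$1; s[NR]=$2; e[NR]=$3; n=NR; next }
     { for (i = 1; i <= n; i++)
         if ($1 == c[i] && $2 < e[i] && $3 > s[i]) { print; break } }' \
  exons.bed loci.bed > overlapping.bed
cat overlapping.bed
```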

TL;DR

I had lots of awk and bash coding issues today BUT I FIGURED MOST OF THEM OUT MYSELF! Just in time to go on vacation and lose all of my hard-earned coding prowess.

Going forward

  1. Conduct a proportion test for DML locations
  2. Update paper methods and results
  3. Work through gene-level analysis
  4. Figure out what’s going on with DMR
  5. Figure out what’s going on with the gene background


from the responsible grad student https://ift.tt/2OkxJ6h
via IFTTT

Shelly’s Notebook: Temperature-influenced protein network differences in the Pacific oyster during larval development (2019 CSHL Network Bio poster)

Click here to view or download this poster on FigShare

Click here to see animated network GIF

Click here to give feed back on this poster

Click here to navigate to github repository

poster

ABSTRACT:
TEMPERATURE-INFLUENCED PROTEIN NETWORK DIFFERENCES IN THE PACIFIC OYSTER (CRASSOSTREA GIGAS) DURING LARVAL DEVELOPMENT

Shelly A Trigg1, Kaitlyn R Mitchell1, Emma B Timmins-Shiffman2, Rhonda Elliott3, and Steven B Roberts1
1University of Washington, School of Aquatic and Fishery Sciences,Seattle, WA, 2University of Washington, Department of Genome Sciences,Seattle, WA, 3Taylor Shellfish, Inc., Taylor Shellfish Hatchery, Quilcene,WA

The Pacific oyster has major ecological and economic importance, serving as a biofilter and habitat in coastal ecosystems and contributing over $190M to annual marine aquaculture revenue. However, little is known about the landscape of protein expression during early development, a time when mass mortality is common, which can negatively impact industry and ecosystems. To better characterize physiological pathways and associated networks active during oyster development, we performed a developmental time series proteomics analysis of larval cultures reared at 23°C and 29°C. These temperatures were selected based on reports from the aquaculture industry that differential performance is observed in oysters at these temperatures. We found differentially abundant proteins among larval cultures reared at different temperatures and a larval culture that experienced mass mortality. Results from functional protein network analyses provide deeper insight into the mechanisms underlying fundamental developmental processes and mortality events. This proteomics analysis combined with survival and development observations offers greater clarity on environmental conditions that can improve aquaculture production.

from shellytrigg https://ift.tt/2UE9Vwn
via IFTTT

Yaamini’s Notebook: DML Analysis Part 28

Revisiting DML analyses

Perfecting the DML track

Based on this issue, I revised the destranded 5x DML track to show methylation differences instead of a false forward strand indication. Turns out I applied mutate on the wrong dataframe in my R Markdown script! My revised track, found here, matched the track Steven generated.

Untitled 2

Figure 1. DML tracks in IGV with 5x and 10x coverage sample tracks.

This is also interesting because Steven used filterByCoverage after processBismarkAln to generate the 5x coverage track, while I used mincov within processBismarkAln. It is interesting to know that we (presumably) get the same DML locations. I decided to go forward with the destranded 5x DML track, since both Claire and Mac used 5x coverage for their analyses.

CpG information

I returned to my R Markdown script and got CpG coverage and methylation information for the 5x processed samples.

s9-cpgcoverage

s9-percentmethylation

Figures 2-3. Example CpG coverage and percent methylation plots.

I also generated correlation plots, full sample methylation clustering, PCA, and screeplots within methylKit.

full-sample-correlation

full-sample-clustering

full-sample-PCA

full-sample-screeplot

Figures 4-7. Full sample plots.

Characterizing DML locations

I revisited my bedtools Jupyter notebook to characterize the location of destranded 5x DML. I reset the variable path name (side note: HOLY FORK SUCH A HANDY TOOL I’M SO GLAD I USED IT) to the new destranded 5x track. I then went through the script and reran all of my DML code. All of the overlap locations can be found here. I also reran my flankBed and closestBed analysis, and saved the output here.

The track had 630 DML, instead of the 1398 in my 3x coverage track. Of the 630 DML, 335 were hypermethylated in the treatment and 295 were hypomethylated. Most of the DML, 576, were located in mRNA coding (genic) regions, with 1474 genes represented. In the genic regions, 388 DML overlapped with exons and 200 overlapped with introns. 61 DML overlapped with transposable elements generated using all species in the database, and 40 DML overlapped with transposable elements generated using C. gigas only. In 1000 bp flanking regions upstream and downstream of mRNA coding regions, 46 DML overlapped with upstream flanks and 50 overlapped with downstream flanks.

Going forward

  1. Conduct a proportion test for DML locations
  2. Describe methylation irrespective of treatment
  3. Update paper methods and results
  4. Work through gene-level analysis
  5. Figure out what’s going on with DMR
  6. Figure out what’s going on with the gene background


from the responsible grad student https://ift.tt/2Ckf8T1
via IFTTT

Grace’s Notebook: Bairdi hemolymph extraction plan (work in progress)

Here’s the general extraction plan. There will be a final version of this extraction plan of actually selecting the tube numbers, and taking into account whatever feedback I get.

Number of samples to process

Based on previous RNeasy processes (Feb 15th, and Feb 20th), we have decided that using 50ul or less of the hemo slurry is best with the RNeasy Kit.

I’ll use 20ul of the hemolymph slurry from about 150 individual sample tubes. (Number of tubes processed will depend on the RNA yield we get)

I’ll process 8 samples at a time, and will select tube numbers such that each infection status and temperature treatment group has 8 samples selected, along with 4 back-up tubes in case there isn’t detectable RNA in any of the samples.

To start, we’ll do 12 sets of 8 tubes, running 1ul of each sample on the Qubit at the end of each set, keeping track of which pools need extra RNA. Then, after the 12 sets of 8, I’ll collect whatever extra tubes we need from the back up list that I’ll make and decide how many additional sets need to be processed.

The goal is to get to the point where I have 12 pooled samples, each with at least 1000ng RNA in 50ul of DEPC-treated H2O.

Protocol preparation

Q: Should I prep huge containers of reagents or should I prep them fresh before each extraction?

Make:
70% ethanol
80% ethanol
Buffer RLT plus 2-BME

RNeasy

  1. Add 350ul of Buffer RLT Plus + B-ME solution to both samples
  2. Vortex to mix
  3. Transfer lysate to QIA Shredder column with 2ml collection tube. Centrifuge 2min at full speed
  4. Transfer flow-through to gDNA Eliminator column with 2ml collection tube. Centrifuge for 30s at 12,000 g. Discard column. Save flow-through. Also save gDNA column for later use.
  5. I measured the amount of flow-through for both samples and added that same volume’s worth of 70% ethanol. The 10ul starting material sample had 340ul, so I added 340ul of 70% ethanol. The 50ul sample had 345ul, so I added 345ul of 70% ethanol. Mix by pipetting.
  6. Transfer sample (including any precipitate that may have formed) to RNeasy MinElute column. Close Lid. Centrifuge 20s at 12,000g. Discard flow-through
  7. Add 700ul Buffer RW1 to RNeasy column. Close lid. centrifuge 20s at 12,000g. Discard flow-through.
  8. 500ul Buffer RPE to RNeasy column. Close lid. Centrifuge 20s at 12,000g. Discard flow-through.
  9. Add 500ul 80% ethanol to RNeasy column. Close lid. Centrifuge 2min at 12,000g. Discard collection tube and flow-through.
  10. Put RNeasy column in new 2ml collection tube. Cut off RNeasy column lid. Keep tube open and centrifuge at full speed for 5min. Discard flow-through.
  11. Put RNeasy column in new 1.5 ml collection tube. Add 14ul RNase-free water (from red-capped aliquoted tube) to center of membrane (I missed the center for the 50ul sample). I forgot to close the lids, so the tubes were centrifuged open for 1min at full speed.

Qubit
Run 1ul of each sample tube on the Qubit using RNA High Sensitivity
KEEP TRACK OF THE TUBE NUMBERS

Upload the Qubit results from each set of 8 immediately to my laptop, add the tube numbers manually, and save.

from Grace’s Lab Notebook https://ift.tt/2uaUcck
via IFTTT