Grace’s Notebook: May 23, 2017 – Preparing samples for Laura


Today Yaamini and I prepared samples for sonication (Wednesday).

Yaamini did her C. gigas, while I prepared Laura’s geoduck.

I did 24 samples:

Screen Shot 2017-05-23 at 2.53.49 PM

We took the original samples from the -80, cut each sample in half, returned one half to the original sample tube, and placed the other half in a newly labeled tube.

We washed tools between each use with ethanol and nano pure water.

During the first round, we were cleaning the tools with a bleach solution and did not rinse with nanopure water. After consulting with Emma, we threw those samples out, as they would not work well. This is why some of the samples taken have * next to them. Not every discarded sample had an exactly corresponding replacement; some had to come from different enclosures.

Laura’s Notebook: Choosing the right ones.

Geoduck sample selection for next round of MS/MS

This is what I have to work with:

image

Steven, Yaamini & I decided to focus on the DNR Trial #1 for the next round of sample extraction/protein analysis.

To narrow down my sample selection further: the geoduck did not fare well in the Skokomish site's bare treatment (likely due to predation), as only 2 specimens survived to be sampled. In addition, Micah said that the eelgrass bed at that site was in a freshwater seep, so the salinity there was significantly different. I am therefore not moving forward with the SK site.

I will select 6 samples from each site/treatment for this next round, ~2 per enclosure. To do so, I used a random number generator to select 2 to 3 replicates from each enclosure.
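The random draw described above can be sketched in Python. The enclosure names and sample IDs below are made-up placeholders (the real list is in the DNR repo); this just illustrates pulling 2–3 replicates at random from each enclosure:

```python
import random

# Hypothetical sample IDs per enclosure -- placeholders, not the real IDs
enclosures = {
    "CI-Eelgrass-A": ["G001", "G002", "G003", "G004"],
    "CI-Eelgrass-B": ["G005", "G006", "G007"],
    "CI-Bare-A":     ["G008", "G009", "G010", "G011"],
}

random.seed(42)  # fixing the seed makes the draw reproducible

# Draw 2 replicates per enclosure (never more than the enclosure holds)
selected = {
    enclosure: random.sample(samples, k=min(2, len(samples)))
    for enclosure, samples in enclosures.items()
}

for enclosure, picks in selected.items():
    print(enclosure, picks)
```

Fixing the seed is optional, but it means the same "random" selection can be regenerated later if the sample list needs to be double-checked.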

image

image

The file with my list of selected samples is located in my DNR repo.

from The Shell Game http://ift.tt/2qeTOVP
via IFTTT

Katie’s Notebook: ID highly expressed proteins

My computer really didn’t like dealing with this much data, but I started organizing it all into pivot tables and it seemed to work?

1. First I created a pivot table that summed all the rows labeled with the same protein name, giving me one total number for each protein per site. I copied this data into a new sheet.

Screen Shot 2017-05-22 at 9.17.15 PM

2. Then I summed each row and got protein totals across all the sites, which I then sorted by value from largest to smallest.
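For a dataset this size, the two pivot-table steps above can also be done outside the spreadsheet. Here is a sketch in Python with pandas; the column names and abundance values are made-up stand-ins for the real protein table:

```python
import pandas as pd

# Hypothetical stand-in for the spreadsheet: one row per observation,
# with one abundance column per site (names are assumptions)
df = pd.DataFrame({
    "protein": ["HSP70", "HSP70", "actin", "actin", "actin"],
    "CI":      [10, 5, 3, 2, 1],
    "FB":      [4, 6, 7, 1, 2],
})

# Step 1: the "pivot table" -- sum rows sharing a protein name, per site
per_site = df.groupby("protein").sum()

# Step 2: total each protein across all sites, then sort largest to smallest
per_site["total"] = per_site.sum(axis=1)
ranked = per_site.sort_values("total", ascending=False)
print(ranked)
```

A groupby-sum like this should handle far more rows than a spreadsheet pivot table before the computer starts to complain.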

I put the file with my results in my folder on OWL:
https://128.95.149.83:5001/index.cgi

Still working on figuring out how to upload the other documents.