EC2 and You, pt. 2: Uploading files and installing stuff

We’ve got our instance up and running; now it’s time to put some files there. We need data to make this all work!

I’m using Laura’s stress-related geoduck reference proteome as my example, but this should work with anything.

First, SSH in to your EC2 instance and make some directories where you’ll put stuff.
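It went something like this; a minimal sketch, with the key file, hostname, and directory names as placeholders rather than the exact ones from my run:

    # connect to the instance (key file and hostname are placeholders)
    ssh -i ~/Downloads/my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

    # on the instance, make directories for the data and the tools
    mkdir -p ~/data/proteome ~/tools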

[Screenshot: SSH session connected to the EC2 instance]

[Screenshot: creating directories on the instance]

Those look like some directories. Next we need to copy stuff over. I use scp, since I have copies on my local hard drive; you can use curl, wget, or whatever you like.
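For reference, the scp call looks roughly like this, run from your local machine rather than the instance; the key, file name, and hostname are placeholders:

    # push the proteome from the local drive to the instance (names are placeholders)
    scp -i ~/Downloads/my-key.pem ~/Documents/geoduck-proteome.fasta \
        ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/data/proteome/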

[Screenshot: scp transfer in progress]

[Screenshot: the transferred files listed on the instance]

Those are some files! Next I install Percolator.

[Screenshot: installing Percolator from the .deb package]

The pertinent command is dpkg -i percolator-v3-01-linux-amd64.deb.
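In full, the install goes roughly like this; the download URL is illustrative (I copied over a local .deb instead), and the apt-get line is only needed if dpkg complains about missing dependencies:

    # fetch the package (URL is illustrative; adjust for wherever your copy lives)
    wget https://github.com/percolator/percolator/releases/download/rel-3-01/percolator-v3-01-linux-amd64.deb

    # install it, then let apt resolve any missing dependencies
    sudo dpkg -i percolator-v3-01-linux-amd64.deb
    sudo apt-get install -f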

Next, we install PECAN.

I didn’t take as many screenshots of this, but it’s referenced in other posts. Short take: unzip PECAN, then edit the config file located in /pecan/PECAN/PecanUtil so it looks like this:

[Screenshot: the PECAN config file, pointing at the proteome files]

Your proteome file names and locations may change, but the idea is the same.

Then go to /pecan/ and run sudo python setup.py install. After a brief wait, the build completes and you should be able to test things by running a bare command.
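Put together, the PECAN steps look roughly like this; the archive name is a placeholder, and I’m showing pecanpie as the bare test command (check the PECAN docs if your install names it differently):

    # unzip the PECAN distribution (archive name is a placeholder)
    unzip pecan.zip

    # build and install; sudo because setup.py writes outside your home directory
    cd /pecan/
    sudo python setup.py install

    # a bare command with no arguments should print usage info if the install worked
    pecanpie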

[Screenshot: PECAN responding to a bare command after the install]

Success! Now on to configuring StarCluster/Open Grid Engine, the not-fun part of all of this.

EC2 and You:

Wanted to do a quick walkthrough of starting a new AWS EC2 instance and uploading data. A little more context: I’ve been looking at exporting the PECAN analysis to something with a whole lot more RAM, and it looks like AWS might be a decent idea.

First, sign up for AWS. I used https://aws.amazon.com/education/awseducate/

Then log in and answer all of the questions. Pretty standard stuff: who you are, where you’re from, why they should let you use their computers, etc.

Navigate to the EC2 Dashboard. It should look about like this. The link to the Dashboard itself is in the upper left corner:

[Screenshot: the EC2 Dashboard]

We want to start a new instance, so click the Launch New Instance button.

[Screenshot: the instance launch button]

We want to use a StarCluster AMI, which is not part of the “default” set of images, so we need to go to the Community AMIs option, found on the left side about midway down the window. Search for StarCluster there.

[Screenshot: searching Community AMIs for StarCluster]

I choose the option “CosmoBox-StarCluster-2015-5-28”. It uses an older version of Ubuntu (14.04, versus the 16.04 we use on Emu and Roadrunner), but hopefully this won’t be an issue.

Next we get to choose our instance type. This is where money starts becoming a consideration. Ideally, we’d be playing with a memory-optimized r4.8xlarge with 32 vCPUs and 244 GB of RAM, but Amazon wants money for that, so we’re building a toy instance on a t2.micro with 1 vCPU and 1 GB of memory.

[Screenshot: instance type selection, with t2.micro chosen]

Next we can configure the instance. This window lets us do two main things: launch multiple cloned instances (if we want to run multiple PECAN sample sets concurrently) and set up spot-price bidding. My understanding is that AWS has two pricing schedules, On-Demand and Spot. On-Demand is just like it sounds: you get it when you want it, but at a higher cost. Spot lets you bid a specific amount, say $0.40/hour, and your instance won’t launch until the floating spot price drops to or below your bid. Since we’re playing on the free tier and only need one instance, we just click Next here.

Next we can add storage. The free tier of EC2 defaults to 8 GB of space but allows up to 30 GB. I’m greedy, so I pick 30.

[Screenshot: the Add Storage step, set to 30 GB]

After hitting next, we have the option to add tags to our instance. This is, as far as I can tell, for organizational purposes only.

[Screenshot: the Add Tags step]

Finally, it wants us to configure firewall settings for access to the instance. Since this is a toy instance, I’m not playing with this.

After hitting Next, we’re invited to review the settings we chose and launch the instance.

[Screenshot: reviewing settings before launch]

We’re asked to make a public/private key pair after hitting Launch. This is important, as it operates as the password for accessing your instance via SSH. Download the text file containing your key and don’t lose it. Losing the file with your private key is akin to losing the key to your house, only there are no digital locksmiths.

After downloading the key, hit Launch. First you’ll see a launch status page; click the View Instances button at the bottom right of the page to go to the main instances page. You’ll see the instance state is “Pending”, which means EC2 is starting your instance. After a minute or so, this will change to “Running”.

[Screenshot: instance state showing “Pending”]

[Screenshot: instance state showing “Running”]

Hooray. Your instance is live.

Access is similar to SSHing in to any Roberts Lab computer, only you have to provide that private key. Select your instance and then click Connect to see exactly what you need to do.

[Screenshot: the Connect dialog with SSH instructions]

If you fail to do the chmod 400 step on your private key, you’ll see this error.

[Screenshot: SSH error caused by an unprotected private key file]

After changing the permissions, I encountered an odd thing on AWS’s end. I copied the ssh command from the example on the Connect page, and the instance was unhappy with it, wanting me to log in as user “ubuntu” rather than “root”. This is safer anyway, as you can really mess things up when you always run as root.

[Screenshot: SSH rejecting “root” and suggesting the “ubuntu” user]
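So the working recipe is to chmod the key once, then connect as ubuntu; the key name and hostname here are placeholders:

    # the private key must be readable only by you, or ssh refuses to use it
    chmod 400 my-key.pem

    # log in as ubuntu, not root
    ssh -i my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com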

After changing that, I try to log back in and am greeted with a nice splash screen showing some information about the instance, what comes pre-loaded, etc. From this point on it operates just like any other machine you’d SSH into.

[Screenshot: the instance welcome splash screen]

Coming next: copying files, installing PECAN, and testing Open Grid Engine (OGE) vs. Sun Grid Engine (SGE).

Unrelated to EC2, I’ve been working on methpipe stuff for Hollie and am running out of space on my laptop, so I decided to move a bunch of intermediate files over to Owl for storage. It makes me miss having Ethernet access to the UW network for transferring huge files. This is slow.
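For big transfers like this, rsync beats a plain copy because it can pick up where it left off after a dropped connection; a sketch, assuming SSH access to Owl, with the user name and destination path made up:

    # -a preserves attributes, -v is verbose, -P shows progress and enables resuming
    rsync -avP ~/methpipe-intermediates/ username@owl.fish.washington.edu:/path/to/storage/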

[Screenshot: slow file transfer to Owl]

Yaamini’s Notebook: Manchester Day 26

A couple of hiccups at Manchester today.

When Grace, Olivia and I walked in this morning, the salinity and pH probes were frozen on the following screens:

Figure 1. The salinity probe read “-L” in both seawater and freshwater.

Figure 2. The pH probe was frozen on this screen and was unresponsive when I tried pressing any button.

I unplugged the pH probe and plugged it back in, and it worked! I turned the salinity probe on and off and removed the battery and put it back in, but it was still unresponsive. Grace had to use the refractometer instead.

Figure 3. Placement of seawater on refractometer.

Figure 4. Grace using the refractometer.

Here’s the data we collected. The higher pH and temperature values are consistent with what Laura and I saw last Wednesday. Another thing to note: our salinity readings, usually around 28, were around 25 instead. That makes sense given the heavy rains we’ve been having recently!

Figure 5. Data sheet.

While Grace collected water chemistry data, I poisoned samples with mercuric chloride. Olivia got to work on the system maintenance.

Figure 6. Olivia cleaning filters.

After she cleaned filters, the two of us bleached algal lines.

Figure 7. Set up to bleach algal lines.

Once Grace finished water chemistry and we ran freshwater through the lines, she and Olivia put the weights and HOBO data loggers back on the seawater lines.

Figure 8. Grace and Olivia putting seawater lines back into culture tanks.

Grace and Olivia will be on their own March 29! I wrote a system maintenance protocol that they’ll follow, so there should be no confusion.

In other news, when I came back from Manchester, Laura and I got to look at our histology samples! Check it:

Figure 9. Histology sample for one C. gigas gonad specimen.
