the budget VPS dilemma

My time on the free Amazon EC2 micro instance drew to a close in the last few weeks. That, coupled with a few minor instance freezes over the last couple of months, got me interested in moving the sites I run elsewhere. Traffic to my sites is relatively low – a few hundred hits a day at most – so a cheap VPS package looked to be exactly what was required.

First up, I did a rough calculation which showed that if I wanted to keep my EC2 micro instance running on a 3 year reserved instance, it was likely to cost me (on average) around £50 per year for a US West instance over the next 3 years, and closer to £65 per year for an EU based instance. Both of these are pretty good value for money – and since these sites are essentially a cost sink, this is of at least some importance.

However, given the hassle I’d been having with my instance freezing, I had two thoughts: #1, is this going to keep happening? And #2, if I take a reserved instance for 3 years then I’m kind of tied down.

In the past I’d been fairly ambivalent about which country the server was hosted in. With Amazon I’d thought screw it, I’ll host in US West as it was cheaper, reasoning that my traffic tends to be spread throughout the world with maybe a slight leaning towards the US. However, while recently creating some backups and trying to copy them locally, I realised that a far-away server might be OK in terms of people viewing a site, but it’s a solid pain in the ass when taking full backups off the server to my local drive. So I started looking at VPS providers in the UK.
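
(For what it’s worth, pulling a compressed backup down over SSH is a one-liner – the paths and hostname here are just made up for illustration:)

rsync -avz user@myserver.example.com:/var/backups/site-backup.tar.gz ~/backups/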

Surprisingly the cost of having your own VPS has nose-dived – I presume this is due to Amazon driving down prices, cheaper hardware and advances in virtualisation software. After looking around I managed to find a decent deal (~£47 a year with www.burstnet.eu) for a VPS with 512MB RAM, 20GB hard disk, and 1GHz CPU speed on a UK based server – I’ll stress again how much better it is to have servers in your own country when using SSH and copying files.

Now, some might ask: why not just go for shared hosting? For many this is likely to be a good option. However, once you are running more than 2-3 sites, I found a VPS is likely to come in at the same price if not cheaper. Plus, I rather enjoy installing the software, setting up the servers, doing the admin, and so on. Sure, I may screw it up from time to time, but over the years I have learned quite a bit from doing things like this myself, and that knowledge has been useful on many occasions. If you write software then you really should know at least something about the platform it runs on – maybe this is old fashioned now, I dunno.

Anyway, I actually started this post aiming to discuss how I set up WordPress, PHP, nginx, etc. on my budget VPS, but I’ve gone on way too long already. That’ll have to be a topic for another day. Instead I’ll wrap this up by summarising that:

  1. Amazon may not always be the cheapest – except in the first year where a micro instance is free, and you can’t beat free;
  2. Check www.lowendbox.com to see if there are any nice deals going;
  3. Try to find a server that is geographically close to you if possible;
  4. If you’re a developer, get a Linux VPS, even if it does cost you – call it an investment in your career. OK, you can do this stuff on a local Linux machine but it ain’t the same. It’s good to know how to set up a web server, configure it, and handle general server admin tasks. If you ever plan to scale something one day these things will be important. It even gives you a chance to use vi or vim and wonder why the fuck people put themselves through that ;-);
  5. Having your own server also allows you to make mistakes without that stomach churning moment you get when you realise you’ve screwed something up at work – you can now screw up on your own time first;
  6. Running your own server also gives you a better idea of how your application actually performs. I’m pretty sure it will surprise you. I thought 512MB was a crazy amount of RAM for a few simple sites; it’s amazing how it all adds up when you are running a DB and a web server with several websites on one VPS. However, that’s the post I was trying to write when I started this. That’ll be next.

idiot’s guide to linux on amazon ec2 – part 2

In Part 1 I covered how to remove the root login, create a new user, and add this user to the list of sudoers on a Linux EC2 instance. In this section I will cover how I got Ruby on Rails, MySQL, Nginx and Thin working together on the Ubuntu instance.

First up, I think it’s worth taking a moment to explain what Nginx and Thin actually are, as they are maybe not as well known as the others.

Nginx is a very fast web/proxy server developed by Igor Sysoev. According to Wikipedia it currently runs, amongst others, the WordPress and GitHub websites.

Thin is a ruby web server that “glues together 3 of the best Ruby libraries in web history”[1]:

  1. the Mongrel parser, the root of Mongrel speed and security
  2. Event Machine, a network I/O library with extremely high scalability, performance and stability
  3. Rack, a minimal interface between webservers and Ruby frameworks

Right, on to the job at hand, and first up was getting apt-get working!

To my surprise (though probably widely known) the Ubuntu EC2 instance did not come with apt pre-configured – unlike yum on a Fedora instance I had previously used. Instead you first have to run apt-get update to download the list of package locations. Once that’s done we can get to work installing the other bits of software required.
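
That first command is simply:

sudo apt-get update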

MySQL
The first things we need to install are the MySQL server and client. To do this run the commands:

sudo apt-get install mysql-server
sudo apt-get install mysql-client

Then you need to make sure that the root password for MySQL is set to something secure. This can be done using:

sudo mysqladmin -u root password 'a_good_secure_password'
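
You can check the new password has taken by logging in to the MySQL console with it:

mysql -u root -p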

Ruby
Now it’s time to install Ruby on Rails. First we need to install ruby, rake, rubygems, and a couple of other useful packages. The following commands should add the required binaries to your path:

sudo apt-get install rubygems
sudo apt-get install build-essential
sudo apt-get install rake
sudo apt-get install ruby-full
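
A quick sanity check that everything has landed on your path:

ruby -v
gem -v
rake --version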

We can now use gem to install rails:

sudo gem install rails

As we will be using MySQL you probably also want to install the MySQL client development library in order to get the ruby gem to build/install correctly. This can be done by running:

sudo apt-get install libmysqlclient15-dev

Obviously the version of libmysqlclient will depend on the MySQL version that you are using. Finally we can install the mysql gem by running:

sudo gem install mysql

Nginx and Thin
To install the nginx package we run the command:

sudo apt-get install nginx

Nginx then needs to be started so we run:

sudo /etc/init.d/nginx start

By default the package should also add the entries required to start nginx automatically when the instance boots or is rebooted – you can always check by looking in the /etc/rcX.d directory (where X is the run-level number).
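
For example, run-level 2 is the default on Ubuntu, so you’d expect to find an S??nginx start symlink in there:

ls /etc/rc2.d/ | grep nginx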

Now it’s time to install thin:

sudo apt-get install thin

Creating application config files for Thin and Nginx
It is a good idea to create config files that can be used to (re)start your thin clusters. To do this we use the thin config command. Let’s assume the app is called myapp, so we run the following command:

sudo thin config -C /etc/thin/myapp.yaml -c ~user/www/myapp --servers 3 -e production

This creates a thin config file /etc/thin/myapp.yaml that starts 3 instances of the rails application found in ~user/www/myapp using the production environment. By default it will start the first server on port 3000 and the next on 3001, and so on. Should you wish to specify the port you can supply it with the -p option, i.e. -p 6666.
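
For reference, the generated YAML looks roughly like this – the exact keys vary a little between thin versions, so treat this as a sketch:

---
chdir: /home/user/www/myapp
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
require: []
wait: 30
servers: 3
daemonize: true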

You can now start your thin clients using:

sudo /etc/init.d/thin start -C myapp.yaml

It’s worth noting that if you don’t specify the -C option thin will use the config files found in /etc/thin and start the thin clients for each config file found in this directory.

As we want to use nginx as a proxy to our thin instances we must create an nginx config file for our application. An example of such a config file is shown below:

upstream myapp {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
 
server {
    listen   80 default;
    server_name example.co.uk;
 
    access_log /home/user/www/myapp/log/access.log;
    error_log /home/user/www/myapp/log/error.log;
 
    root   /home/user/www/myapp/public/;
    index  index.html;
 
    location / {
        #auth_basic "Please supply login details";
        #auth_basic_user_file /home/user/www/myapp/public/protect.passwd;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
 
        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html;
            break;
        }
 
        if (-f $request_filename.html) {
            rewrite (.*) $1.html;
            break;
        }
 
        if (!-f $request_filename) {
            proxy_pass http://myapp;
            break;
        }
    }
}

The upstream block at the top sets up the proxy for the thin clients that we started on ports 3000-3002. The servers you list here obviously depend on the number of clients you started and the ports they are running on. The rest of the file is dedicated to setting up the web server, with the majority of settings being pretty self explanatory, so I’ll only highlight the important bits.

First, the listen and server_name directives tell the server to wait for requests on port 80 for the domain example.co.uk. It’s worth noting that hosting a subdomain, say subdomain.example.co.uk, is as easy as replacing example.co.uk in the server_name directive with subdomain.example.co.uk. The proxy_set_header and proxy_redirect lines take care of things like forwarding the real client IP address to Rails, along with some other setup required when proxying. Finally, the three if blocks check whether an index.html file is available at the URL specified and display it if so, serve static .html files straight up, and, if the file specified by the URL does not exist on the file system, pass the request on to our thin clients.

As a side note, the two commented-out auth_basic lines enable basic HTTP authentication in nginx. You can uncomment these lines if you require this feature. The password file for HTTP auth can be generated using the Apache utility htpasswd – you will need to install the package that contains the htpasswd utility.
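
On Ubuntu that package is apache2-utils, so something along these lines should do it (the username and file path are just examples):

sudo apt-get install apache2-utils
sudo htpasswd -c /home/user/www/myapp/public/protect.passwd someuser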

The config file (let’s call it myapp) is placed in /etc/nginx/sites-available, and finally a symlink is created from the sites-available directory to the sites-enabled directory to enable the website:

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp

That’s it. All we need to do now is restart nginx (sudo /etc/init.d/nginx restart) and, assuming your config is OK, the site should now be up and running. (If nginx is already running and you want it to re-read the config without a full restart, you can get the pid of the nginx master process, ps aux | egrep '(PID|nginx)', and run sudo kill -HUP PID – in fact this is all you actually need to do to get your site up and running.)
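
Nginx can also validate the config for you before you restart or reload – handy for catching typos (the pid file location may differ on your system):

sudo nginx -t                            # test the configuration
sudo kill -HUP $(cat /var/run/nginx.pid) # reload without a full restart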

[1] The Thin homepage – http://code.macournoyer.com/thin/

idiot’s guide to linux on amazon ec2 – part 1

Recently I’ve had the opportunity to set up a Linux instance on Amazon EC2 for use with Ruby on Rails, MySQL, Nginx and RabbitMQ. I suspect much of what I will document is obvious to many, but hopefully some of you may find it useful – especially if, like me, you are totally inexperienced with setting up a Linux server.

As it turns out I’ll probably document this over a couple of posts, as it took up a bit more time and space than I first anticipated. In this first part I will cover logging in as the root user, adding a new user, generating their SSH key, adding the user to the list of sudoers, and finally disabling root login via SSH. I’ll update this article with links to the other parts as I create them (Part 2).

Right, first things first, some background info. Rightly or wrongly we required the server to do more than one thing, hence the list of items to install. So to reduce this number I picked an image with RabbitMQ pre-installed – as setup of this was uncharted territory for me. A consequence of this choice was that it pushed us down the path of Ubuntu and the latest version which is currently 9.10. So let’s get to it.

The goal here is to disable remote root login, and in doing so we need to create a new user and give him the ability to sudo commands. To do that we first need to log in to our new EC2 image – which took me a little time to figure out! This can be done from Windows using PuTTY. However, we must first use PuTTYgen to generate a PuTTY SSH auth key (PuTTY doesn’t understand the key generated by Amazon) from your Amazon keypair, which can be found in the AWS Management Console under Key Pairs. Check out this link for further information.

Now on to the real work.

Adding a user and generating their ssh key
Follow the process below to add a new user and generate an SSH key for this user – a command summary is given after the list.

  1. Login as root using method described above
  2. Run adduser webuser – where webuser is the name of the user we are adding. Fill in the details including the password of this user.
  3. Type su webuser – to run a shell as this user without logging out
  4. Execute ssh-keygen -t dsa from this user’s home directory
  5. Rename the file ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys
  6. Copy the generated private key (it should be in ~/.ssh/id_dsa) to your local machine
  7. Now use puttygen to generate the ssh key from id_dsa
  8. Finally, log in using PuTTY and the new key – you should only have to specify your username when logging in.
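
For reference, steps 2-5 boil down to the following at the shell (run as root; webuser is our example username, and the chmod line is my own addition since sshd can be fussy about key file permissions):

adduser webuser                                       # step 2: create the user
su webuser                                            # step 3: shell as the new user
ssh-keygen -t dsa                                     # step 4: generate the key pair
mv ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys           # step 5: authorise the public key
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys  # tighten permissions, just in case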

Adding your new user to the list of sudoers
This is a very basic sudoers setup as we are only adding a single sudo user to the /etc/sudoers file. I know you can do way more complicated things with this but what is documented here was sufficient for our needs. So let’s get on with it.

  1. Login as root
  2. Run visudo – this is an editor for the sudoers file that stops multiple people editing the file at the same time and checks the syntax before saving
  3. Locate the lines below in the editor

    # User privilege specification
    root ALL=(ALL) ALL

    and change this to

    # User privilege specification
    root ALL=(ALL) ALL
    webuser ALL=(ALL) ALL

  4. If you would like to allow the user to sudo without having to supply a password then you need to add the following line as well:

    webuser ALL=NOPASSWD: ALL

  5. Now save the file and exit – ensure that the changes are saved to /etc/sudoers

Disabling root login

  1. Login as webuser
  2. Run sudo vi /etc/ssh/sshd_config – you can replace vi with another editor if you please; I’ve heard nano might be a little friendlier to Windows users!
  3. Find the line PermitRootLogin and change it to:

    PermitRootLogin no

    If I remember correctly, in the instance I was using there was more than one line with PermitRootLogin, so it may be worth checking for this yourself.

  4. As a side note, should you wish to allow login using passwords rather than an SSH key (this may be what users familiar with shared hosting are used to) you can enable this by changing the relevant line in sshd_config to:

    PasswordAuthentication yes

  5. Finally, restart sshd by running sudo /etc/init.d/ssh restart

You should now be able to log in as webuser, and use sudo to run commands that require root. Additionally, root login from a remote machine has been disabled.
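
A quick sanity check from a unix-y machine (the hostname below is just a placeholder for your instance’s public DNS name):

ssh webuser@ec2-xx-xx-xx-xx.compute-1.amazonaws.com   # should log in with your new key
sudo whoami                                           # should print: root
ssh root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com      # should now be refused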

There may be better ways to do the above, but what I’ve documented works. I may also be missing stuff – if so, let me know and I will update this. Well, that’s it for now. Check back soon for Part 2, which will be on its way shortly.

Update: idiot’s guide to linux on amazon ec2 – part 2