observations from academia

I’m going to sum up this post before I’ve barely started it: only pursue a career in academia if your goal in life is to fill out grant application forms.

Academia, in the UK at least, is moving further and further towards judging the performance of academics on their grant-application stamina. Teaching and student satisfaction are a distant second and third in the list of factors contributing to the success of your academic career, which I think is wrong. In fact, these factors count for so little on the sliding scale of your career at a university that it’s almost not worth stating they contribute at all.

I’m lucky: I’m only an academic by default, in that I’m doing a post-doc but have no plans to pursue a career in this beyond the completion of this work. Looking in from close up, though, it certainly doesn’t look pretty for the future.

So what can we do about this? Honestly, I think the problem has maybe gone too far down the line for any form of redemption. However, the only people who have any chance of halting this trend are the students themselves.

Students hold the power at universities; they just don’t use it effectively. Their power arises from the fact that their fees pay for the running of a university. Economics states that if fewer students apply for specific courses then that department has less money. Student numbers are *key* to an academic department. Fewer students means less money and fewer lecturers; it’s as simple as that.

If a department can correlate a reduction in student numbers with bad teaching practices then maybe something can change. Maybe this is all a bit idealistic, I don’t know.

I’m not sure if there are any social networking tools out there that allow students to rate lecturers in such a way – I expect there are. However, looking out from the inside, their use is largely being ignored if they do exist. Maybe someone can change this. I’d like to think so.

back to the future with c plus plus

Around 6 years ago I was steadily programming away in C++ on Unix using a Sun workstation, and I quickly followed this up with some Windows programming using MFC. During this time I also experimented with PHP, creating a small “MVC” style framework long before Rails was even a twinkle in its daddy’s eye. Since then I’ve managed to have quite a varied experience with languages such as Java, C#, and most recently Ruby. However, I now find myself running back 6 years and working with good old C++ again.

So why am I back here? Well, to be honest, performance. The stuff that I’m currently working on (the Kidney Exchange Problem) requires a fair bit of it – it’s an NP-hard optimisation problem for which we really do need to find an optimal solution, using an exponential-time algorithm. Anyway, maybe more details on this another day.

With C++ being decidedly “yesteryear” with the cool kids, I was pleased to find that it still seems to have a large user base and a moderately active community, and with Google being heavily invested, it would appear that it ain’t going to die as quickly as people may have thought. Despite this, I wasn’t holding out too much hope for finding a wealth of “modern” programming tools, and first on my list was a JUnit/NUnit clone.

After some searching and experimentation (which resulted in some moderate surprise) I came across Google Test – good old Google! In my opinion this is the best C++ unit test framework out there. I did try others (CppUnit, CppUnitLite, and another which I now can’t remember) but Google Test was the easiest to use, has everything I need, and seemed to be the most active in terms of development.
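To give a flavour of it, here’s a minimal sketch of a Google Test case – the gcd function under test is just something I’ve invented for illustration:

#include "gtest/gtest.h"

// Hypothetical function under test: greatest common divisor.
int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}

TEST(GcdTest, HandlesSimpleCases) {
    EXPECT_EQ(6, gcd(54, 24));
    EXPECT_EQ(1, gcd(17, 13));
}

TEST(GcdTest, HandlesZero) {
    EXPECT_EQ(5, gcd(5, 0));
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

Link against the gtest library and run the resulting binary and you get a nicely formatted pass/fail report for each test.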

Also, now very old hat, is Boost. Although this has been around for quite some time, back when I was first invested in C++ it just didn’t exist (or if it did, it was in its infancy). I just love things like shared_ptr and the Boost Graph Library. I’m sure there is much, much more in Boost that I will find to use in the coming months.
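As a quick sketch of why shared_ptr is so pleasant (C++03-era syntax, and the Vertex struct is just an invented example):

#include <boost/shared_ptr.hpp>
#include <iostream>
#include <vector>

// Invented type for illustration.
struct Vertex {
    explicit Vertex(int id) : id(id) {}
    int id;
};

int main() {
    std::vector<boost::shared_ptr<Vertex> > vertices;
    vertices.push_back(boost::shared_ptr<Vertex>(new Vertex(1)));
    vertices.push_back(boost::shared_ptr<Vertex>(new Vertex(2)));

    // No explicit delete needed: each Vertex is freed automatically
    // when the last shared_ptr referring to it goes out of scope.
    std::cout << "first vertex id: " << vertices[0]->id << std::endl;
    return 0;
}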

Finally, for just now, I also have to give an honourable mention to Google Mock. I’ve not had a chance to use this much at the moment but, like Google Test, it looks as if it will come in pretty handy.
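Again, purely as a sketch of the style of thing it lets you do – the Database interface below is invented for illustration:

#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical interface we want to mock out in tests.
class Database {
public:
    virtual ~Database() {}
    virtual bool Save(int id) = 0;
};

class MockDatabase : public Database {
public:
    MOCK_METHOD1(Save, bool(int id));
};

TEST(RepositoryTest, SavesRecord) {
    MockDatabase db;
    // Expect exactly one call to Save(42), and have it report success.
    EXPECT_CALL(db, Save(42)).WillOnce(::testing::Return(true));
    EXPECT_TRUE(db.Save(42));
}

int main(int argc, char **argv) {
    ::testing::InitGoogleMock(&argc, argv);
    return RUN_ALL_TESTS();
}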

It’s nice to see that C++ is still living and breathing, even if it ain’t so cool any more. However, with the community and C++0x, it looks as if it may well just have some sort of future.

does your employer suck?

So you slog your guts out each day programming away, typing faster and faster, writing tests and more tests, refactor after refactor, then after five years you think to yourself: “Screw this! I’m doing all this work for Mr Somebody and I’m getting jack shit in return”. Then before you know it you’ve said those immortal words: “I’m going to start my own business”.

Yeah yeah, we’ve all heard it before. The very next day you go back to typing, you end up getting married, you have kids, and you die.

If my business model was that I got £1 (or $1 – I’m not fussy) for every developer I’ve heard say this, then I’d currently be sitting in my winter home in Sydney, Australia – rather than cycling my ass down to work every day in sub-zero temperatures. In fact, much of that wealth would have come from my own contributions.

So why do we do this? I mean, for sure, we developers certainly think we can run a business better than our employers – remember how you tell yourself that every day – so why do so few developers start their own?

To be honest, this is maybe not so much the case in the USA, or in Silicon Valley at least. The Valley seems to bring out the spirit in people to follow through with these ventures, so why can we not harness the same spirit here in the UK?

Over here people frequently talk about it but never do it, and yes, I’m one of those people. We need more people doing it here in the UK. It can work as well: check out Redgate Software, who not only create great products but also help other startups. It’s almost like Silicon Valley in Cambridge – OK, I took that one a little too far.

Realistically, to give our economy a brighter future, we need to improve things. I know everyone can’t do it, but everyone wants to work in a cool place that treats their staff well, right? So those of you who think you’ve got it, stand up. I’m sure there must be plenty of you.

So the next time you are moaning about your job, spend the time thinking about a product or a service that would be great, get some balls, and create your own “thing”. I know I’m going to – well, I’ve just made myself look like a fool with this rant if I don’t try, and that’s kind of why I wrote it.

the future is email

In a recent podcast Jeff Atwood and Joel Spolsky were discussing the virtues of email. As a somewhat brief summary of the discussion, I think it’s fair to say that Jeff hates email. As it happens I’m inclined to go along with him on this one, but possibly for different reasons.

From what I can gather, the problem Jeff and Joel were describing with email is that we end up with an inbox full of emails that never get read or processed, and allocating the time to process these mails and respond would be a job unto itself. To be honest, I don’t suffer this problem quite as badly, mainly because I do not receive anything like the level of email that both of these guys do. As a result, it’s hard for me to appreciate hating email for the same reasons. However, I feel there is a far more toxic fallout from the email culture than the one described.

It has come to my attention recently that many (large) organisations still use email as a way of providing an API to their service. What do I mean by this? Well, in this digital age it is often the case that a web application uses the API provided by another web service to interact with it. For example, if I want to receive a list of the tweets that I’ve made, the Twitter API allows me to do this by accessing a given URL, which returns the data in the format of my choosing. Similarly, I can also publish a tweet by posting data to a given URL. This is all rather nice – everything is communicated over good old HTTP, allowing easy integration with other services.
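As a rough sketch of how simple this style of integration is, here’s an HTTP GET using libcurl – the URL is made up, standing in for whatever endpoint a real service documents:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the response body into a std::string.
static size_t write_cb(char *data, size_t size, size_t nmemb, void *userp) {
    static_cast<std::string *>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    // Hypothetical endpoint; a real service documents its own URLs.
    curl_easy_setopt(curl, CURLOPT_URL, "http://api.example.com/tweets.json");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK)
        std::cout << body << std::endl;

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}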

However, many firms that are not technologically aware are using email as a mechanism for inter-app communication. I don’t want to bash individual companies here, but the ones I have personally dealt with are multi-national companies with profits in the billions. Yet despite such profit margins their software systems look like something out of WarGames. To perform any communication with these systems you send an email, and responses are returned to you via the magic of an automated email. Thus, you have to parse the email body and/or an email attachment to get the response data. It’s as if Web 2.0 passed like an amoeba in the night without these companies so much as blinking.

You may say that it must work for these companies to be making the profit margins they do. However, as customers, we are paying thousands in development costs to integrate with these systems – and in many cases we have no choice in the matter. At the end of the day these companies are slowly falling behind, and if they are choosing not to innovate at this level, I personally don’t hold out too much hope for innovation on a wider scale. This kind of strategy no longer works; Google and other such companies have long since blown this way of doing business out of the water. No-one’s saying the death of the non-innovators is going to be quick!

So what is the message I’m trying to get across? Well, first, email for this style of inter-app communication isn’t really helping anyone. If you are thinking about doing something like this, please think again. Also, on a more general note, can companies that fail to innovate survive in this technology-driven society? It’s obviously hard to answer this question with definite authority, but going by gut feeling and history, it doesn’t look good.

idiot’s guide to linux on amazon ec2 – part 2

In Part 1 I covered how to remove the root login, create a new user, and add this user to the list of sudoers on a Linux EC2 instance. In this section I will cover how I got Ruby on Rails, MySQL, Nginx and Thin working together on the Ubuntu instance.

First up, I think it’s worth taking a moment to explain what Nginx and Thin actually are, as they are maybe not as well known as the others.

Nginx is a very fast web/proxy server developed by Igor Sysoev. According to Wikipedia it currently runs, amongst others, the WordPress and GitHub websites.

Thin is a Ruby web server that “glues together 3 of the best Ruby libraries in web history”[1]:

  1. the Mongrel parser, the root of Mongrel speed and security
  2. Event Machine, a network I/O library with extremely high scalability, performance and stability
  3. Rack, a minimal interface between webservers and Ruby frameworks

Right, on to the job at hand, and first up was getting apt-get working!

To my surprise (though probably widely known) the Ubuntu EC2 instance did not come with apt pre-configured – unlike yum on a Fedora instance I had previously used. Instead you first have to run apt-get update to download the list of package locations. Once that’s done we can get to work installing the other bits of software required.
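For completeness, that’s simply:

sudo apt-get update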

MySQL
The first things we need to install are the MySQL server and client. To do this run the commands:

sudo apt-get install mysql-server
sudo apt-get install mysql-client

Then you need to make sure that the root password for MySQL is set to something secure. This can be done using:

sudo mysqladmin -u root password 'a_good_secure_password'

Ruby
Now it’s time to install Ruby on Rails. First we need to install ruby, rake, rubygems, and a couple of other useful packages. The following commands should add the required binaries to your path:

sudo apt-get install rubygems
sudo apt-get install build-essential
sudo apt-get install rake
sudo apt-get install ruby-full

We can now use gem to install rails:

sudo gem install rails

As we will be using MySQL you probably also want to install the MySQL client development library in order to get the ruby gem to build/install correctly. This can be done by running:

sudo apt-get install libmysqlclient15-dev

Obviously the version of the libmysqlclient will depend on the MySQL version that you are using. Finally we can install the mysql gem by running:

sudo gem install mysql

Nginx and Thin
To install the nginx package we run the command:

sudo apt-get install nginx

Nginx then needs to be started so we run:

sudo /etc/init.d/nginx start

By default the package should also add the entries required to start nginx when the instance is rebooted – you can always check by looking in the /etc/rcX.d directory (where X is the run-level number).
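For example, on Ubuntu the default run-level is 2, so something like the following should show the nginx entry:

ls /etc/rc2.d | grep nginx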

Now it’s time to install thin:

sudo apt-get install thin

Creating application config files for Thin and Nginx
It is a good idea to create config files that can be used to restart your thin clusters. To do this we use the thin config command. Now, let’s assume the app is called myapp and so we run the following command:

sudo thin config -C /etc/thin/myapp.yaml -c ~user/www/myapp --servers 3 -e production

This creates a thin config file, /etc/thin/myapp.yaml, that starts 3 instances of the Rails application found in ~user/www/myapp using the production environment. By default it will start the first server on port 3000, the next on 3001, and so on. Should you wish to specify the port you can supply it with the -p option, e.g. -p 6666.
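For reference, the generated file looks something along these lines – I’m reproducing the field names from memory, so treat this as illustrative and inspect the file thin actually writes:

---
chdir: /home/user/www/myapp
environment: production
address: 0.0.0.0
port: 3000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 3
daemonize: true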

You can now start your thin servers using:

sudo /etc/init.d/thin start -C myapp.yaml

It’s worth noting that if you don’t specify the -C option, thin will use the config files found in /etc/thin and start the servers for each config file found in this directory.

As we want to use nginx as a proxy to our thin instances, we must create an nginx config file for our application. An example of such a config file is shown below:

upstream myapp {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
 
server {
    listen   80 default;
    server_name example.co.uk;
 
    access_log /home/user/www/myapp/log/access.log;
    error_log /home/user/www/myapp/log/error.log;
 
    root   /home/user/www/myapp/public/;
    index  index.html;
 
    location / {
        #auth_basic "Please supply login details";
        #auth_basic_user_file /home/user/www/myapp/public/protect.passwd;
        proxy_set_header  X-Real-IP  $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
 
        if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html;
            break;
        }
 
        if (-f $request_filename.html) {
            rewrite (.*) $1.html;
            break;
        }
 
        if (!-f $request_filename) {
            proxy_pass http://myapp;
            break;
        }
    }
}

The upstream block at the top of the file sets up the proxy for the thin instances that we started on ports 3000-3002. The values that you include here obviously depend on the number of instances that you started and the ports they are running on. The rest of the file is dedicated to setting up the web server, with the majority of settings being pretty self-explanatory, so I’ll only highlight the important bits.

First, the listen and server_name directives show that the server waits for requests on port 80 and that the domain used for this site is example.co.uk. It’s worth noting that hosting a subdomain, say subdomain.example.co.uk, is as easy as replacing example.co.uk in the server_name directive with subdomain.example.co.uk. The proxy_set_header and proxy_redirect lines take care of things like forwarding the real IP address to Rails, as well as some other setup required for https. Finally, the if blocks at the end check whether an index.html file is available at the URL specified and, if so, display it; serve static .html files straight up; and, if the file specified by the URL does not exist on the file system, pass the request on to our thin instances.

As a side note, the two auth_basic lines that are commented out enable basic HTTP authentication in nginx. You can uncomment these lines if you require this feature. The password file for HTTP auth can be generated using the Apache utility htpasswd – you will need to install the package that contains it.
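On Ubuntu, htpasswd lives in the apache2-utils package, so something like the following should do it (someuser is just a placeholder username):

sudo apt-get install apache2-utils
sudo htpasswd -c /home/user/www/myapp/public/protect.passwd someuser

The -c flag creates the password file; drop it when adding further users to an existing file.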

The config file (let’s call it myapp) is placed in /etc/nginx/sites-available, and finally a symlink is created in the sites-enabled directory pointing at the file in sites-available to enable the website (use absolute paths, so the link resolves correctly):

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp

That’s it. All we need to do now is restart nginx (/etc/init.d/nginx restart) and, assuming your config is OK, the site should now be up and running. (If nginx is already running and you want it to re-read the config without restarting, you can always get the pid of the nginx master process, ps aux | egrep '(PID|nginx)', and run sudo kill -HUP PID – in fact this is all you actually need to do to get your site up and running.)

[1] The Thin homepage – http://code.macournoyer.com/thin/

idiot’s guide to linux on amazon ec2 – part 1

Recently I’ve had the opportunity of setting up a Linux instance on Amazon EC2 for use with Ruby on Rails, MySQL, Nginx and RabbitMQ. I suspect much of what I will document is obvious to many, but hopefully some of you may find it useful – especially if, like me, you are totally inexperienced in setting up a Linux server.

As it turns out I’ll probably document this over a couple of posts, as it took up a bit more time and space than I first anticipated. In this first part I will cover logging in as the root user, adding a new user, generating their ssh key, adding the user to the list of sudoers, and finally disabling root login via ssh. I’ll update this article with links to the other parts as I create them (Part 2).

Right, first things first, some background info. Rightly or wrongly, we required the server to do more than one thing, hence the list of items to install. To reduce this number I picked an image with RabbitMQ pre-installed – as setting this up was uncharted territory for me. A consequence of this choice was that it pushed us down the path of Ubuntu, and the latest version, which is currently 9.10. So let’s get to it.

The goal here is to disable remote root login, and in doing so we need to create a new user and give him the ability to sudo commands. To do that we first need to log in to our new EC2 image – which took me a little time to figure out! This can be done from Windows using PuTTY. However, we must first use PuTTYgen to generate a PuTTY ssh auth key (PuTTY doesn’t understand the key generated by Amazon) from your Amazon keypair, which can be found in the AWS Management Console under Key Pairs. Check out this link for further information.

Now on to the real work.

Adding a user and generating their ssh key
Follow the process below to add a new user and generate an ssh key for this user; a condensed version of the commands is shown after the list.

  1. Login as root using the method described above
  2. Run adduser webuser – where webuser is the name of the user we are adding. Fill in the details, including the password of this user.
  3. Type su webuser – to run a shell as this user without logging out
  4. Execute ssh-keygen -t dsa from this user’s home directory
  5. Rename the file ~/.ssh/id_dsa.pub to ~/.ssh/authorized_keys
  6. Take a copy of the generated private key (it should be in ~/.ssh/id_dsa) and copy it to your local machine
  7. Now use PuTTYgen to generate the PuTTY ssh key from id_dsa
  8. Finally, login using PuTTY and the new key – you should only have to specify your username when logging in.
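Condensed, steps 2-5 look something like this on the server (webuser is just my example username):

adduser webuser
su webuser
cd ~ && ssh-keygen -t dsa
mv ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys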

Adding your new user to the list of sudoers
This is a very basic sudoers setup as we are only adding a single sudo user to the /etc/sudoers file. I know you can do way more complicated things with this but what is documented here was sufficient for our needs. So let’s get on with it.

  1. Login as root
  2. Run visudo – this locks the sudoers file to stop multiple people editing it at the same time, and checks the syntax of your changes before saving
  3. Locate the lines below in the editor

    # User privilege specification
    root ALL=(ALL) ALL

    and change this to

    # User privilege specification
    root ALL=(ALL) ALL
    webuser ALL=(ALL) ALL

  4. If you would like to allow the user to sudo without having to supply a password then you need to add the following line as well:

    webuser ALL=NOPASSWD: ALL

  5. Now save the file and exit – ensure that the changes are saved to /etc/sudoers

Disabling root login

  1. Login as webuser
  2. Run sudo vi /etc/ssh/sshd_config – you can replace vi with another editor if you please; I’ve heard nano might be a little friendlier to Windows users!
  3. Find the line PermitRootLogin and change it to:

    PermitRootLogin no

    If I remember correctly, in the instance I was using there was more than one line containing PermitRootLogin, so it may be worth checking for this yourself.

  4. As a side note, should you wish to allow login using passwords rather than an ssh key (this may be what users familiar with shared hosting are used to), you can enable this by changing the relevant line in sshd_config to:

    PasswordAuthentication yes

  5. Finally, restart sshd by running sudo /etc/init.d/ssh restart

You should now be able to log in as webuser, and use sudo to run those commands that need to be run as root. Additionally, root login from a remote machine has been disabled.

There may be better ways to do the above, but what I’ve documented works. I may also be missing stuff; if so, let me know and I will update this. Well, that’s it for now. Check back soon for Part 2, which will be on its way shortly.

Update: idiot’s guide to linux on amazon ec2 – part 2

academics don’t care and industry doesn’t have the time

I’ve been reading some interviews recently (Coders at Work) and one interviewee (I can’t remember who, but I think it was Fran Allen) suggested that in the last 20 years programming languages have not progressed in leaps and bounds as they appeared to in the early days. Are they right?

Personally, I think this might be a good call. How different are the current programming languages from C? OK, today’s popular duo, Java and C#, have garbage collection, and thus we don’t have to deal with those troublesome pointers. Then again, both LISP and Smalltalk have had garbage collection for a long, long time. C# has lambda expressions and other higher-order functions, but then again this stuff has been around in LISP since day dot. It’s almost as if the “C-style” languages are battling to catch up with the likes of LISP – which is a language I’ve never used other than for some emacs hacking. However, LISP has been around since 1959, so why has it taken so long for people to realise that many of its features are incredibly useful? Have we been held back by the fact that most academics don’t really care too much about this kind of stuff, and so don’t push it, and industry is too busy telling us something needs to be done last week?

First, I know that there are academics who do care about this stuff, but I don’t think there are enough. And by virtue of being academics, they are not exposed to many of the problems faced by your everyday software developer, and as such, maybe they don’t see the future so well. So is it up to industry to innovate at this level? If it is, I don’t see us moving too far forward from our current situation in the next few years. Why?

Well, most small to medium-sized software houses are not exactly making money in large enough quantities to warrant throwing it away on research that some other company making similar software will use to create a similar/better product. So instead of spending money pushing forward the state of the art, you would be insane not to spend that money on making a better product. This, combined with crazy schedules, does not leave much in the way of time for forward thinking.

So it appears that we need to rest our hopes on the large enterprises like Microsoft, Google, et al. It’s fair to say that these companies invest a significant amount of money in research, some of which goes toward trying to make programming languages better (F# and Go, for example). However, taking these two languages as examples, we see that F# doesn’t appear to push that many boundaries, and I can’t comment too much on Go, as I’ve not looked at it closely, but it also doesn’t seem to include too many radical switches. Maybe radical does not sit well with shareholders, I dunno.

The thing is, I’m not professing to have much insight on this either, and I’m not even sure I know what I mean by a radical switch. I just know it doesn’t appear that academia or industry are moving this forward quickly enough – if the last 20 years are anything to go by. The only remaining vehicle for change is the programming “community” as a whole, but how much traction we can get is debatable. That said, we do need to move in a different direction, I’m sure of that, and things like multi-core processors may force us, kicking and screaming, in that new direction. However, maybe the problem is even deeper than this, and a switch away from the whole von Neumann architecture is required. Who knows? Well, I hope someone does!

battling with the back button

Recently, and also a few months back, I had a long drawn-out battle with the browser back button. There just seems to be something intrinsically broken about the web model when you combine AJAX and that pesky button. In fact, even without AJAX, the back button in combination with form data seems to cause more than enough problems on its own. Consider my latest fight below.

My task was to repopulate a form when the user clicks the back button. For those not familiar with the problem at hand: when a user clicks the back button, the page displayed is rendered from cache, and no request is made to the server. The actual page displayed when the back button is clicked may be in various states depending on both the browser being used and the way the page was constructed in the first place, i.e. whether it was rendered in stages with AJAX/JavaScript requests or just once when the page was loaded.

A simple, or not so simple, way to repopulate the form is to perform an AJAX call when the DOM loads, and either render part of the page again and send HTML in the response, or send the form values back using JSON, etc.

Here I’m going to consider the first of these suggestions, but the problem I faced applies equally to the second (I think!). Let’s suppose that we make a request to the URL http://equivalence.co.uk/Step1/Index and that we are following some kind of wizard where we have “Previous” and “Next” buttons to navigate between the steps. Furthermore, let’s assume that the page rendered by default when accessing Step1/Index contains a form. Now suppose that we click the “Next” button and navigate to our new URL, say http://equivalence.co.uk/Step2/Index, and on this page we decide to hit the back button. As I described previously, on reloading Step1 we make an AJAX call back to the server, and importantly, we send the request to the same URL as the page we are currently on, i.e. Step1/Index.

When receiving this request on the server we check whether it’s an AJAX request or a simple GET, then render accordingly. It’s probably reasonable to ask why the hell I was doing this: well, if the server received an AJAX request there was no need to render the whole document again, as I am only interested in the form element, but on a plain GET request I wanted to render the whole page.

However, this is where I ran into a little problem. Now suppose that we click the “Next” button again to move to Step2/Index and once again hit the back button. What is displayed?

Well, to my surprise, it was only the form, i.e. the response returned by the AJAX call – not the full page, no HTML head, nothing, just the form. Now that I have seen this behaviour, I totally understand why it happens: the browser displays the most up-to-date data sent for this URL, which in my case corresponds to the AJAX response from when the back button was hit previously. For some reason I just didn’t expect this – incidentally, this was the behaviour in Firefox; I’m not sure what happens in the other browsers. As a result, I had to change the AJAX call to request “Step1/Index2” instead, and then everything worked fine – I think supplying a random GET parameter to the original URL may also have worked.

So there we go, that’s it! Exciting post? Maybe not! Hopefully this helps someone though. It’s always something to bear in mind.

PS I’ve ended up with the weirdest problem ever whilst typing this entry in Google Chrome, it’s somehow messed up the keyboard mapping. As an example the top row on a QWERTY keyboard types as “azertyuiop^$” and you have to press shift to get the numbers. Strange.

the mark of successful software

Right, so I’ve been thinking about this for a while now. It’s a question that has been contemplated more times than anyone cares to remember, and still we have no good way of achieving the goal it sets out. A quick search on Amazon will probably yield a thousand books on the subject – each and every one making you think the next is the holy grail. So am I going to tell you the question? Well, I suppose I must.

What is successful software? I’m sure the exact definition of this is subject to opinion and means different things to different people. Therefore all I can do is bore you with the details of what I think makes successful software:

  1. Software that the end user is happy with – a no-brainer: if no one wants to buy or use it then you may as well give up. Happiness encompasses how the software looks, the user experience, and the functionality.
  2. Software that is finished to schedule – no one likes to wait, from management to clients, and even developers. Software that is late causes stress on everyone. Scheduling is hard and it has to be realistic. The end result of cramming and cutting corners is poor quality.
  3. Software of high quality – you can finish on time, but if it doesn’t do what it needs to do in a manner the user expects, or is simply littered with bugs, then there is not much to be cheerful about. It has been documented [1] that one hour spent on bugs now can save 3 to 10 hours later; as a result, bugs caught after release cost between 40-50% more. This is always worth keeping in mind.
  4. Software that is cost effective – achieving all three things above but at a cost which prevents profit does not make sense. Costs are important and developers should also be mindful of this – even for internal software.
  5. The developers are happy and proud – I’m sure there are many employers that would be happy if they got software satisfying the first four criteria but couldn’t give a shit how their employees felt or what it took (out) of them. Stressed-out employees are no good; they will only leave, and sooner rather than later you will find that you can no longer attract those employees who are smart and get things done. It’s surprising how easy it is to keep people happy without spending a lot of money.
  6. The management are happy and proud – if all the above is achieved at the expense of a stressed out, worn out, nerve shattered manager, then the same problems that arise with unhappy developers can be seen in management with the same negative results.

Now that we have a definition of successful software, we can ask the natural question of how to achieve it. As I mentioned earlier, there are more books than I care to read telling me how to do this. Yet I feel none of them can apply in every situation. They all preach some methodology, from the latest fad to the decidedly idiotic. If any one of these methodologies actually conclusively worked, there would not be so many books. I’m convinced that the truth lies somewhere in between them all and depends not only on the organisation, but on the people involved. So surely those involved in creating the software may have a better perspective on what will work than the author of a book? (This is not to say you can’t steal ideas that you have read and apply them in a suitable manner.)

I have my opinions on what I think would work for me, which I can probably document in a follow-up post. For the moment I will leave this with one statement: don’t be frightened to break rules and invent your own when something is not working. This is the only way we can make our software more successful.

[1] Rapid Development – Steve McConnell

sticking with what you know

There comes a time in every programmer’s life when they have to learn new things and step out of the box. Yeah, it’s difficult, for sure. It’s all too easy to create the latest application in your software empire using a language you’ve been developing in for the last 10 years. However, the real problem is thinking this is the only choice. When is it time to abandon this certitude?

First, we cover the forced abandonment. This is when you are pushed kicking and screaming into pastures new, whether you like it or not, i.e. the new job. Here, not only is the new-language curve ball thrown (viciously), but you also get a whole new set of business rules into the bargain. So what do you do? You program the new language like the old one, only translating the syntax in your head. This is not the best way to learn a language, though. Why? Well, consider those C programmers trying to program procedurally in Java, Java programmers in JavaScript, C++ programmers in Ruby, and so on. When there is a change in paradigm this mapping strategy just doesn’t work – and a similar situation exists with languages that contain a more powerful expression set. It also encourages the behaviour where people learn just enough to get the job done, without understanding what is really happening, or realising that there may have been a better way using the “unmappable” features of the language. A better approach is to write something small, and new, that allows you to explore the language’s features. I’m sure most people can think of something they could write. Furthermore, if you can make it useful to other people, or even your new employer, then everyone’s a winner! This is something I touched on before.

For many people, though, this is the only time they will ever consider abandoning ship. This is sad, and a poor characteristic in a programmer. And to be honest, I just don’t understand it. That’s not to say I don’t accept that some people just do programming as a job, then go home and don’t think about it. However, like most things in life, it’s nice to progress, isn’t it?

As a programmer there will also be other signs that the tide is turning, and you don’t have to be too alert to spot them. Previously I wrote “Perl is Dead, Long Live…Perl?” and, being a big Perl fan, it was sad to see the language apparently dying, so I know what it’s like. Some signs to look out for may be:

  • the language features are not moving on (Java, watch your back) – the people who created it no longer care,
  • the community surrounding the language is dwindling – the people who use it no longer care,
  • there is little in the way of choice when selecting libraries/frameworks – the experts have fled,
  • other programmers have never heard of it – there is no buzz,
  • jobs using it are few and far between – businesses have given up on it; the death knell.

However, this is all not to say that you should give up on your language just because it’s no longer cool – popularity is by no means a great indicator that something will suit your needs. It need not be the case that you give up on your language of choice; instead you could contribute and drag the language forward. But be careful with this one.

Finally, any decent employer will want to see that you are continually developing your skill set – their business needs are continually evolving, so why aren’t you? You are much more likely to land a better job if you contribute to your own education in some way. It looks good and it’s also something to talk about.

So go out and learn something new today, and stop sticking with what you know.