the importance of thoughtful design in web applications

When I think back to around 5 years ago, it was commonplace to use web apps that looked awful but provided content and services that we couldn’t find elsewhere. In those days it was seen as less desirable to create a website that relied heavily on graphics, and as a result a “dressed down” website was still almost cool. However, with the rapid increase in bandwidth available to the average user, and the rise in competition between online services and retailers, there is far more to lose by having a website whose design sucks.

Now fast-forward to the present day, and we are using the internet more and more as part of our everyday lives. The reality is that we programmers can no longer get away with designs that make you cry! This is especially important when a website sells a product or service of some form. There is literally nothing that will send me running away from a site faster than poor design. I would rather pay more at a site that has:

  1.  A sensible domain name – this can be hard to find; just how annoying on a scale of 1-10 are those who squat on domain names?
  2. (Up-to-date) contact information, namely: an address; history of the company; a phone number; names and possibly a short biography of the key people running the website – even if you are working on your own in a garage, put your name up and tell me who you are.
  3. A well-thought-out design – it doesn’t have to be fancy, it just has to look as if it wasn’t designed in a day. Basically I just want to know you ain’t going to steal my money.
  4. Copyright dates that are current – this is a small thing, but attention to detail!
  5. Fresh content – a company blog can help here; ideally you want to avoid the web equivalent of the sun bleached items in a corner shop window.
  6. Good spelling and grammar – I’m no expert on this (as you may have noticed) but getting this correct is very important. Spelling in particular.
  7. Secure account login – you may look cheap if you have not coughed up for an SSL certificate; they ain’t that expensive (although being an SSL certificate distributor seems like a licence to print money if you ask me).
  8. The ability to NOT send me a password reminder – I almost cry when I see a site that offers to send me my password via email, as I then know they are storing my password as plain text in their database somewhere (see the sketch after this list). I can then only imagine what they will do with my credit card details.
  9. A payment system that doesn’t force me to navigate away from the site to hand over my money, unless I choose to – some folk are happy with PayPal, I’m still suspicious!
  10. Easy to access help, specifically on: delivery, customer service, and returns.
  11. No links into your site from pop-up ads on external websites.
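
On point 8: if a site can email you your password, it is storing it in a recoverable form. Below is a minimal sketch (in Python, purely for illustration) of the alternative – store a salted hash and compare against it, so the original password is never kept anywhere.

import hashlib, hmac, os

def hash_password(password):
    # Store the salt and the derived hash; the original password is unrecoverable.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)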

To focus on point 3 above in more detail: I understand that an innovative website design can cost you a fair bit of money, as graphic designers are not cheap. As an alternative to expensive design, at least avoid the following:

  1. Pop-ups – a no brainer.
  2. Free hosting – especially hosting supported by ads of any kind.
  3. Poor font choice.
  4. A template based design that several of your competitors are also using – you need to stand out, not blend in.
  5. Writing company biographies in the third-person narrative – LAME.
  6. An entry page with no purpose – it’s rare these days but you still see it.
  7. Using Flash, but not fixing the back button to work as expected.
  8. Input fields that are not long enough to take a typical string, e.g. a credit card box that can’t display all the digits of your card without scrolling – WTF, how can you double-check your number?
  9. Items that are not aligned on the page, but should be – c’mon, this doesn’t take long to fix.

The days of amateur design are numbered for those wishing to make a living online. As more companies make online services available and give the consumer a choice, people become less and less likely to trust a site that lacks attention to detail. In my eyes the design does not have to be innovative, it just has to appear safe and trustworthy – obviously eye-catching design brings attention with it, that I’m not denying, but it does wear off. So what key factors do you look for in the design of a website, and what is likely to send you running in the opposite direction?

(Update: I was just listening to the boagworld podcast and it happens to mention something that ties in with this quite nicely. So check out the link to find out ways to make your site feel safe).

shattering illusions – is google losing its googly culture?

I have forever viewed Google as the ubiquitous dream maker for the software engineer. You work on great products (Google Search, Gmail, Google Reader, Google Apps, Google App Engine, the list goes on), and they appear to treat developers well (20% time, free lunches and drinks, gym membership, and more). However, it seems that not everyone is happy, if a recent article (entitled Why Google Employees Quit) on TechCrunch is anything to go by.

It’s understandable that people who no longer work at Google may have reasons to slate their previous employer. However, these people did not go out of their way to highlight problems at Google; instead, Google actually asked them why they left. I think we can safely say that many of the points raised in this collection of responses are valid issues.

By far the most common complaint is with regard to the recruitment process.  It apparently takes forever. It obviously never stopped these people from joining, but left a lasting impression.  However, if they could get over this hang-up with the recruitment process, what really is the problem?

First, as much as people like to pretend that it doesn’t matter, money plays its part. This seems especially relevant to the ex-Microsoft employees, who seemed to justify the pay cut they took when moving from Microsoft to Google as being “worth it” to work in a “Googly” culture – where Googly tends to translate into FUN. It seems strange that Google does not have similar pay scales to Microsoft, as they are direct competitors on this front. This may become more of an issue for Google if it appears that their Googly culture is in recession.

Another point that popped up on more than one occasion was management. Let’s face it, this is always an easy shot, but it was reiterated by enough people to warrant a look. My initial impression is that there exists the typical competitive race for promotions (as in looking good, not necessarily doing good); this always leaves certain people unhappy, myself included, and it always brings out the worst in people. I’m genuinely surprised and sad that this sort of behaviour has seeped its way into Google, though it seems the inevitable outcome of a standard management hierarchy in a large corporation. Will we never learn anything from Gore’s “self-management and the flattened hierarchy”? It seems not.

So, is it too soon to say that Google may be losing its highly valued culture? Such a shift can surely only play into the hands of Microsoft: with no difference in culture, and higher salaries, it seems like a no-brainer for the top candidates.

My own personal opinion of Google (as an employer) has diminished somewhat after reading the aforementioned article – whose contents must be a PR nightmare for Google. I have maintained (indirectly) for quite some time now that people will only accept smaller salaries if the environment is FUN to work in. Fun seems harder to maintain as a company grows and becomes filled with those with glittering ambitions for their own careers. Unfortunately the ethos of working together to obtain mutual reward seems somewhat out of place in the new millennium. I suspect even this recession, or a prolonged depression, will not stifle the greed of those that are selfish and do not care.

why would you choose .net?

I’ve been wondering for quite some time now why a new startup, a greenfield project, or even a personal site would choose .NET for web development.

Firstly, this is not a Microsoft-bashing article – I’m simply trying to understand the thought behind such a choice. While researching this I noticed another recent article (titled When Windows beat Linux: a cautionary tale) on something similar, so I will use some of the information from it.

For those who can’t be bothered to read the article I linked to (as many people on a certain social network site decided to do before apparently casting their vote), I will summarise. The article looks at a case study of a German airline company that was restructuring the IT systems of a bankrupt airline it had acquired. In this process they were moving from a Linux-based “scripting” solution to a Windows-based .NET stack.

To aid the discussion let’s say that I’m starting a new mISV (micro Independent Software Vendor – this seems to be the buzz word for startup).  What will I be producing? I don’t know, let’s say an online bakery because I LOVE cakes so much.  Now suppose that we choose to do this using Python and Django on a Linux dedicated server.

So let’s assume that we are going to be a rip-roaring success and that every business close to us will be looking to buy our cakes to reward their industrious employees, i.e. the application should scale reasonably well.

A quick look at the case study seems to imply that for every Windows-based server needed, we require 2.5 times the computing power for equivalent performance in the Linux-based system:

4 Windows Server IIS 6.0-based computers replacing the 10 computers that had hosted the former Linux version

A quick check on Google reveals that a Windows dedicated server will cost around £120 (~$170) more per year than a Linux server. But I need 2.5 times as many Linux servers as I do Windows servers (assuming my application is going to be maxing out the Windows server). Therefore, if a Linux server costs me £600 (~$840) a year, this means that I’m £780 better off with the Windows server as my choice. Hold on though, because, let’s face it, the conclusions drawn about the number of servers required for each solution in the case study are pure bullshit! Right? The case study thinks it’s fair to compare 4 brand new top-of-the-range servers against the 10 Pentium IIIs from heaven knows when that they replaced. Common sense states that I would get at least as much performance out of my Linux server as I would the Windows one. Hence I will be saving £120 a year, not gaining £780 as the case study would like you to believe.
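
To put rough numbers on that, here is the back-of-the-envelope calculation in Python; the prices are the illustrative figures quoted above, not real quotes from any hosting company.

linux_server_per_year = 600      # GBP per year, illustrative figure from above
windows_premium_per_year = 120   # extra cost of a Windows dedicated server
windows_server_per_year = linux_server_per_year + windows_premium_per_year  # 720

# Taking the case study's 2.5x ratio at face value:
print(2.5 * linux_server_per_year - windows_server_per_year)    # 780 "saved" by Windows

# Assuming instead that a modern Linux box performs at least as well as a modern Windows box:
print(windows_server_per_year - linux_server_per_year)          # 120 saved by Linux

Now on to the software.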

The case study states that: 

The Web front-end to the e-commerce solution was rewritten using Microsoft C# technology, introducing object-oriented programming to what had formerly been a script-based solution and enabling the solution to be updated and expanded more easily in response to business requirements…The Web server portion of the solution took three months and $120,000 to develop; had SWISS used Java, Heintel estimates, the solution would have taken 50 percent more time and money.

How the hell did Heintel arrive at that estimate? Christ, every bank in the world would be scrambling to rewrite their enterprise Java apps at that rate. Let’s face it, their estimate was utter nonsense – my Mum could have come up with a better estimate, and she still hasn’t figured out how to use that wee-thing-that-you-move-with-your-hand-to-make-the-wee-arrow-thing-on-the-screen-move! Moving on, I didn’t realise that you couldn’t write object-oriented code in a Linux environment – news to me! I mean, it’s not as if you could write the exact same object-oriented solution in a language of your choice, whether it be Python, Java, Ruby or PHP (or even C# using Mono). So instead of the software saving me money, it’s going to cost me. Why?

First, I’m going to have to purchase SQL Server, which I can’t imagine is cheap – say around £800. Not only that, I’m going to have to pay for upgrades that I might need in the future, not to mention more SQL Server licences for any additional database servers should I need them. The Microsoft stack is certainly not saving me any money here.

Now on to the IDE. For the Linux-based system I could use NetBeans or Eclipse, which are free. For the Windows-based system I could use Visual Studio Express Edition. However, I can’t imagine the Express Editions are good for building large web applications, and I haven’t seen too many Microsoft shops using them – am I wrong? Presuming we can’t use the Express Editions, I need to pay for the full version of Visual Studio, which comes in at around £600 per developer. The Linux-based approach, however, costs me NOTHING for each additional developer that I add.

Another facet of .NET development I have noticed is that you tend to have to pay for nice developer tools that are otherwise free on non-Microsoft stacks – ReSharper being the example that springs to mind. Hence you have to factor in the cost of such third-party tools.

All in all, it appears that the Windows-based system is going to cost me waaaaay more to get started than the Linux equivalent. The costs may not seem that high to some people, but when you have limited finances to start your own mISV, any cost, however small, is something you can do without – more so in the current economic climate.

One thing I have so far failed to take account of is the cost of learning new technology. If you are a veteran C# developer then taking the Linux route would mean learning to use new tools and new languages. However, any developer worth their salt is keen to improve their knowledge and would quickly be able to cope. Most smart developers see it as FUN to learn something new.

What I would like to mention is Microsoft’s BizSpark. This came to my attention after listening to the Startup Success Podcast, of which I have become an avid listener. If memory serves me correctly it drastically reduces the cost of Microsoft development tools for mISVs – sorry, I can’t remember the exact price, but it’s low (update: please see the comments below for some more info on this). This kind of incentive from Microsoft is a great idea, and something I may look at closely, alongside the Linux-based options, in the mISV ventures I will soon be embarking on.

To sum up: with the exception of the BizSpark incentive, if it truly delivers what it appears to, I just can’t see any reason to choose the .NET stack over a free Linux-based solution. Can you?

the future of the humble programmer

Back in ancient times, and right up until relatively recently (the mid-1400s), scribes were used to copy important documents for kings, queens and other nobility. It’s hard to imagine that most people couldn’t write back then, and I suspect (but can’t find any hard facts) that many, many people still had difficulties writing until the beginning of the 1900s.

However, the job of a scribe became almost redundant overnight with the invention of moveable type. At this point we began to see a power shift, as documents were easily copied, translated and distributed to the masses. Obviously there was still the hurdle of learning to read, but now many important documents were available to more than just a handful of people – prior to this, reading such documents was limited to the nobility and the clergy.

Moveable type was the most popular form of distributing documents and information right up to the modern day. However, with the advent of computers and the internet, this has changed for good.

Today we find fewer and fewer people reading books; instead we happily overdose on blogs and social networking sites. This information exchange only serves to benefit each and every one of us, as now we can observe opinions that are not dependent on the views of an editor with whom we have little to nothing in common.

It’s easy to see how jobs have transformed over the centuries: once-coveted jobs are now in the hands of “amateurs”, who, to their credit, provide content that is more pertinent to the interested party. So where does this leave us as software developers? How will our roles stand up in the future?

Think back to what I said earlier about how, not too long ago, most people couldn’t even write, and consider that the computer was out of reach, both financially and physically, of most people – yet now both of these things are the norm in society. So, just as everyone learned to write, is everyone going to learn to program?

OK, you may be thinking that a field like mathematics has been around a long, long time, and that not everyone is competent in even basic mathematics. However, let’s face facts: a general programming task is nowhere near as difficult as even high-school level mathematics. That’s not to say difficult computing tasks don’t exist; in fact I’m hoping to convince you of the opposite.

Is it that crazy to think that one day people will program a computer in the same way that we read and write English (insert your native tongue here)? I don’t think so. Programming is not really that difficult (doing it well is as difficult as writing beautifully in English, and that has never stopped people writing – look at me). Just as blogs and the internet pulled down the barrier for each and every one of us to write and be heard, I feel it’s only a matter of time before programming computers becomes something more akin to what many will do in everyday life.

If we look closely, the first steps are already underway. People are using HTML, CSS and JavaScript as if they were everyday things. They may be using them under the veil of certain tools, but the “non-programmers” are programming. They may not even know they are doing it, and building more complex applications is only going to get easier.

For example, consider writing a database application for an online bicycle shop using CakePHP (something I have experience of, hence the inclusion). You have to know almost nothing about using a database to create this application. OK, you may say that scaling and optimising these things takes a “professional”, but at the rate we are pushing the technology, this barrier may not be there in 5-10 years’ time – consider the cloud computing environments as a step in this direction.
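
As a rough illustration of how little database knowledge such frameworks demand, here is the kind of model definition Django (the Python framework mentioned earlier) works from; the class and field names are made up for the example, and CakePHP’s conventions are similar in spirit – the framework generates the schema and the queries for you.

from django.db import models

class Bicycle(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=8, decimal_places=2)
    in_stock = models.BooleanField(default=True)

# Typical usage, with no SQL in sight:
#   Bicycle.objects.filter(in_stock=True).order_by("price")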

What I’m not saying here is that all computing is easy. There are still problems that are difficult to solve and require much, much more than simply following a fixed set of instructions. Indeed, this is the domain where we developers must start focusing.

There may be many readers out there who think this is all nonsense and that programming computers is always going to be an elite occupation. Just tell that to the scribes, journalists and media presenters/organisations, whose occupations have either vanished or are suffering severe contraction. Many of these groups never saw it coming (or refused to see it coming) and did nothing – just remember how quickly this actually happened to each of them. Do you want to be in the same position?

why is version control so hard?

I was listening to the latest Stack Overflow podcast, where the discussion floated round to version control. There seemed to be a general consensus that version control is hard. This seems like nonsense, but in actual fact it’s completely true.

My first experience of version control was with SourceSafe – I had met CVS briefly at uni but at the time couldn’t see the point, and without the need to understand it for an exam, there seemed little point in pursuing it further. Even having no experience of version control prior to SourceSafe, I still knew it sucked.  Why? Well the fact that I couldn’t edit a file that someone else had already checked out seemed absurd – especially as people had the tendency to leave files checked out when they went away on holiday.  Creating branches also seemed like a kludge, the result being no-one used them.  However, it was pretty easy to understand so Microsoft got something right!

After this I moved to an organisation that used ClearCase.  Oh boy this was hard.  I just didn’t get it. For about the first 2 months I had to get a friend to edit my config specs to pull in the correct files!  I can’t explain to you how much I just didn’t get it.  It was like black magic. Then one day it just clicked (akin to the day you finally get pointers). If I look at a config spec now (see below for a simple example) I just don’t understand why I didn’t get what was going on!

# First select any files that are checked out to this view
element * CHECKEDOUT

# Pull in the latest version of any files that are on the branch GREGG_BUGFIX
element * .../GREGG_BUGFIX/LATEST

# Get everything else for the latest off the mainline
element * /main/LATEST

I like the idea of the config spec a lot, and I actually think it is a pretty good system for pulling in different files from different branches. Subversion is nowhere near as nice as this. The problem, however, is that you need a file system (or workspace of some sort) that knows how to interpret the details in the config spec to make this work, and I think this is ClearCase’s ultimate downfall: it’s just too costly to set up and maintain – you essentially need an admin working on it alone – and that’s ignoring the fact that it costs a fortune to buy.

Next I moved on to Subversion. The philosophy behind Subversion is not too dissimilar to that of ClearCase, so it was a relatively easy step. However, managing a Subversion repository is substantially more user-friendly than managing ClearCase views and VOBs. Although I really hate the idea of automated merging – which everyone tended to avoid like the plague when using ClearCase – it seems impossible to escape with Subversion.

So why is ClearCase/Subversion so difficult to understand? I’m not even sure now, as it all just seems natural, from creating a view, to writing a config spec, checking out files, branching and then merging in the changes. Do we basically accept that it’s one of those things that just takes time to understand? The problem is that many people either give up or try to use each system like SourceSafe – I have seen this happen. Furthermore, I have also worked with, and heard about, many others who just do not get concurrent version control and branches, and point-blank refuse to use them. I wish I could come up with a golden piece of information that switches on how it all works, but I can’t seem to put my finger on it.

What I tend to suggest to people is experimenting with the different features of a version control system, primarily branching and merging, as this is what tends to catch people out. Experience seems to indicate that a fear of doing things wrong, and thereby losing work, is a particular issue. To combat this (with Subversion), I suggest making a copy of your working folder (including the .svn directory) prior to experimenting. Now you can perform an SVN update or merge without the fear of “losing” anything you have done (you simply copy the contents of the folder back and start again if need be). This may be bad practice, I dunno, but it’s a policy that has saved me a few times during a complex merge.
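
For what it’s worth, that safety net is nothing more than a folder copy; here is a tiny sketch in Python, where the paths are made up for the example.

import os, shutil

working_copy = os.path.expanduser("~/projects/cakeshop")        # hypothetical checkout, includes .svn
backup = os.path.expanduser("~/projects/cakeshop-pre-merge")    # throwaway copy to fall back on

shutil.copytree(working_copy, backup)
# Now run `svn update` / `svn merge` in the working copy; if it all goes wrong,
# delete the working copy and rename the backup into its place.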

Moving towards the future, it would appear that distributed version control systems are going to be the next big thing. I have a little experience using Mercurial, but I feel a bit like I did when I first started using ClearCase – out of my depth. I do, however, love the idea of checking in constantly without it being a public operation. It also seems to me that it is a very nice model for having your live website under version control. However, it’s all still pretty new to me, and I admit to being a little scared of it. All the same, I’m following my own advice and just getting stuck into using it, even if I do make a mistake or two, as that seems like the only way to learn.

some thoughts on pair programming

In the last couple of days I have had a chance to do a little bit of pair programming.  In the past I have found myself being a touch sceptical of it all, but predictably I hadn’t actually done much.  So here are some thoughts from my recent experience.

Firstly, I must admit it was only half pair programming, in the sense that I was only an observer and not a driver.

I would say that my biggest concern with pair programming is that I found it hard to believe there would not be a drop in overall productivity with two people concentrating on the same task. However, I’m beginning to think that, counter-intuitively, this may not be the case – although it also depends on the task.

Take, for example, the case where two people have completely different skill sets and both are required for the job. In my case, I knew literally nothing about the programming language we were developing in. However, I had a fair amount of knowledge with regard to the “API” we were coding to (fair amount meaning I wrote it!). As a result I knew what we were trying to achieve, just not how to achieve it. In this situation the pair programming seemed to go very well, and I would suspect that the functionality was added to the code twice as fast as it would have been if either of us had had to do it alone.

So what are the other benefits? You almost always learn something when pairing up – even if you have more experience than the other person. For me, I learned that you could place a hash (#) in front of a number to specify the ASCII value of a character; for example, #9 is a tab character. This had passed me by for some reason – I knew that in HTML you could do stuff like &#33; to obtain an !, but I had just never thought about what it meant. Now I understand! You also get the benefit of seeing how someone else thinks, and you can learn a lot from that. You might also pick up “tips” for navigating the environment or the tools being used.
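
I haven’t named the language we were pairing in, but the underlying idea is just character codes; a quick Python illustration of the same thing:

import html

print(repr(chr(9)))            # '\t' – character code 9 is the tab character
print(chr(33))                 # '!'  – character code 33
print(html.unescape("&#33;"))  # '!'  – which is exactly what &#33; means in HTML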

There are some downsides too. When I was doing it over the last couple of days I didn’t notice too many, but I can certainly imagine things like different personalities, strong opinions, and reluctance to participate affecting things quite drastically. Here are a few that I think may annoy the driver, but as I wasn’t driving I don’t really know for sure.

First, as a navigator you sometimes notice tiny things like misspelt names, missed semi-colons, etc., and at the start I pointed these out pretty quickly. However, I then thought to myself: maybe the person had noticed these things and was going to go back and change them when they finished typing the line. So I just tried to stop myself interrupting too much – I suspect someone who constantly points things out can get pretty annoying. I also had to stop myself preaching coding habits of mine that make no difference to functionality; that was harder though, as I pretty much have an opinion on just about everything 🙂 . As for navigating, I’m sure someone who completely ignored your suggestions and coded away regardless would be extremely annoying.

I’m not sure that pair programming is THE definitive development strategy. I’m certain that having someone sitting over you while you write some form of input parser, where you are just going through the motions, is almost pointless. Also, when you have to think really hard about something, whether it be debugging a tricky problem or designing some complex algorithm, someone interjecting all the time would be extremely off-putting.

So to sum up: pair programming definitely has advantages, and with some careful planning it can be used to improve a project’s quality, cost and time to launch.

does not being aware of np-completeness make you dumb?

I have recently been reading a series of posts ([1][2][3]) from various authors about the pros and cons of having or not having a degree in CS. I’m sure the argument goes both ways, but my thoughts are focused on how someone who has not spent 4 years obtaining a degree accrues the knowledge they may need.

For the purpose of this post I will use the concept of NP-completeness (or in fact algorithmics in general) as an example. From experience, I have learned that NP-completeness is a topic quite a few developers are not familiar with, despite its importance in the field. Many may be aware of it through Jeff Atwood’s post over at Coding Horror, where he suffered at the hands of the topic’s rigour.

So how does someone who has never obtained a CS degree find out about such a topic? My feeling is that they don’t. OK, some people will go out of their way to find such topics, but I haven’t met too many who do. Some may even point out that NP-completeness did (does) not appear on their course curriculum, in which case it probably boils down to Dan Dyer’s point about the virtues of choosing a good university course. What I’m trying to say is that someone who goes to a decent university and obtains a good degree has spent (possibly) 4 years learning about the foundations of computing, giving them a good base from which to make important technical decisions.

For example, you are hit with a problem at work – let’s say some sort of job scheduling problem, or the allocation of people to resources. The problem seems easy when described, the agile developer in you takes over, and you start churning out code. However, had this person taken even a simple algorithms course at university, they might have thought twice about jumping into such a problem – as many variants of these problems are known to be NP-hard.

This brings me back round to my question: where do such developers learn about these things? They may read articles on sites like DZone or Reddit, but c’mon, let’s face it, these sites are pretty good at giving you the latest articles about Java, .NET and jQuery, but it’s rare to see an article about algorithmic issues rise to the top of the popularity list. I mean, who wants to read the details of what a B+ tree, a trie or the Boyer-Moore string matching algorithm is? It’s hard going, so instead most will take the easy option of reading guff like “7 ways to write beautiful code”. However, if you attend university you are forced to know many of the finer details of CS, as you are examined on them. OK, you may not need them for a simple web app, but an employer is certainly going to be happier that you have the details in your head should you need them. The point is that you never know when you are going to need such things, and knowing when you do is maybe what you spend 4 years gaining.

naïve algorithms will let you down

Over the last few years I have been in a position where I have been party to discussions about software with those who know little to nothing about the art of computer science.  Many of these people were in charge of very large governmental budgets for some fairly high-profile public-facing software systems.  These systems typically involved a large amount of deep algorithmic work, which on the face of it might seem simple, but was actually very complicated.  Given this situation you would have thought people would listen when advice was being given out.  Oh how wrong you would be!

For some context to this I will introduce a problem, which is not strictly related to the situation described above (to protect the guilty), but has a similar flavour.

The problem I’m going to describe is that of allocating a set of school children to their (or, I suppose, their parents’) most desirable school. Each parent is asked to rank 5 schools in strict order of preference. A centralised system is then used to assign each child to a school.

The naïve (serial-dictatorship) solution to this problem would be to simply assign each child, in turn, to the first school on their list that is not already full. So what is the problem with such a solution? Well, firstly, we can end up with an assignment of children to schools that does not have maximum cardinality, i.e. there exists another allocation in which a larger number of children are matched to schools. Also, we may end up with an allocation in which a child c1 ends up at his fifth-placed school while another child c2 obtains his first choice, but if c2 moved to his second choice then c1 would be able to get his third choice. If we allow this switch to take place we clearly end up with a “better” allocation.
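
Here is a minimal sketch of that naïve approach in Python (the names and data are invented for illustration). The example input shows the maximum-cardinality failure: processing c1 first leaves c2 with no school at all, even though an allocation matching both children exists.

def serial_dictatorship(preferences, capacities):
    # Assign each child, in turn, to the first school on their list with a free place.
    allocation = {}
    places_left = dict(capacities)
    for child, ranked_schools in preferences.items():
        for school in ranked_schools:
            if places_left[school] > 0:
                allocation[child] = school
                places_left[school] -= 1
                break
    return allocation

preferences = {
    "c1": ["s1", "s2"],   # processed first, takes s1
    "c2": ["s1"],         # s1 is now full, so c2 is left unassigned
}
capacities = {"s1": 1, "s2": 1}
print(serial_dictatorship(preferences, capacities))   # {'c1': 's1'} – c2 is unmatched
# Assigning c1 -> s2 and c2 -> s1 would have matched both children.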

Despite the poor characteristics of the naïve algorithm, I suspect that many, many organisations, both governmental and private, use such an algorithm when assigning people to resources where preference information is available. The problem is that the people in charge of building such systems often only see the need to allocate people to resources in whatever way possible and in the shortest time. So what happens? Well, they get a system that looks flashy and kind of works. However, a couple of months in, when they have a group of people complaining and operating outside the system, reality finally dawns on them. I’m sure in the UK alone we could count quite a few computer systems that have fallen under the weight of this pressure.

So we have shown that the naïve algorithm does not always work.  Can we safely assume that the heart of a software product lies in those complicated algorithms at its core?  I think we can say so with Google, but who knows in general.  In the example I described I think this is definitely the case.  So what would be the “best” way to allocate these children to schools?

There are different meanings of “best”, but all would ensure that the allocation has maximum cardinality; without this we are at a lost cause straight away. Following this, we could maximise the number of children assigned to their first-choice school, then, subject to this, maximise the number assigned to their second-choice school, and so on. Therefore, if an angry parent complains that their child got their third-choice school, we can at least tell them that no other assignment could have given their child a better school without making some other child with a first- or second-choice place worse off. Hence there is no incentive for the administrator to allow any changes to take place. This is known as a greedy maximum matching (allocation). Lastly, we need to make sure that these algorithms are efficient – most algorithms are no use to anybody if they take exponential time.
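
As a hedged sketch of how one might compute such a greedy maximum matching in practice, here is one approach using the networkx library: give each (child, place) edge a weight that grows exponentially with how highly the school is ranked, and ask for a maximum-weight matching among the maximum-cardinality ones. The function and data names are invented for illustration, and this is certainly not the only (or fastest) way to do it – specialised matching algorithms exist for exactly this problem.

import networkx as nx

def greedy_maximum_allocation(preferences, capacities, num_ranks=5):
    children = list(preferences)
    base = len(children) + 1   # big enough that one extra first choice outweighs
                               # any number of improvements at lower ranks
    G = nx.Graph()
    for child, ranked_schools in preferences.items():
        for rank, school in enumerate(ranked_schools, start=1):
            for place in range(capacities[school]):
                seat = (school, place)          # one node per individual place
                G.add_edge(child, seat, weight=base ** (num_ranks - rank))
    # maxcardinality=True: match as many children as possible first, then
    # maximise the weight, which realises the first-choice/second-choice profile.
    matching = nx.max_weight_matching(G, maxcardinality=True)
    allocation = {}
    for u, v in matching:
        child, seat = (u, v) if isinstance(v, tuple) else (v, u)
        allocation[child] = seat[0]
    return allocation

Under the hood this is just weighted matching, which runs in polynomial time – comfortably meeting the efficiency requirement above.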

So there we go, just remember that naïve algorithms are not always (and often not) the best.