iterative development

We’ve all had it drummed into us that development should be done in short, sharp iterations. “Release early and release often” – no agile developer gets up in the morning without repeating this mantra to themselves before they reach the sink. However, surely it’s important to recognise the difference between “Release early and release often” and “Release utter shit early and release utter shit often”.

Who really wants to use an application/toolkit/library with so many bugs that the user experience is dreadful? Surely users would rather wait a few more weeks for something that is actually polished? I mean, we’re not talking Microsoft Vista-style release cycles here, but quality control consisting of a little more than running a set of unit tests would be nice.

So what has prompted this minor rant? Well, Firebug, unfortunately. I hate to criticise it because it’s free, but the bugs I find in it are getting worse and worse – the most recent being that the continue button no longer works in the latest release (1.4.0b10). Sigh. However, open source or not, my feeling is that if you choose to do something then do it properly or not at all.

Anyway. Regardless of the product, it surely makes more sense to sacrifice your release schedule a little for some real testing and bug fixing? For commercial products, a bad user experience is going to make a sale difficult both now and in the future. Being first to market is one thing, but it’s far from being the only (or even main) contributing factor to a product’s success – Google and Facebook being good examples of this.

So is it possible just to have a little common sense about all this and maybe restate the mantra as “Release quality early, release quality often”? It can’t be that hard, right?

the real geek test

I’m sorry, I’m sorry, but I object to the use of the word geek. I’d always imagined that a geek was someone who was obsessive about computer programming. Lately, though, I’ve seen people use it in a way that is beginning to annoy me – it’s used to describe someone who simply uses the latest and greatest software or gadgets. This was brought to a head when I clicked a link in a tweet (thanks Colin!) which sent me to an article called “Top 10 Ways to Provoke a Geek Argument”.

I mean, I like to think I’m a geek – sad, but true. However, if the things mentioned in that article are likely to incite an argument in a geek, then I ain’t a geek. True geeks laugh in the face of such insults. So bearing all this in mind, I say a real geek is someone who takes offence when:

  1. someone states that programming is just a day job;
  2. someone tells you they work in IT/computing but they can’t program – this often comes in the form of a third party telling you their friend works in IT, just like you!
  3. someone tells you that the technology used doesn’t matter;
  4. someone’s roll call of programming languages does not extend to at least 4 and they don’t know at least one dynamic language – real brownie points go to those who know a functional language;
  5. someone shifts a conversation about programming to something else – oh yes, including girls, well maybe not!
  6. a developer tells you they don’t have broadband at home;
  7. someone states that Agile development is essentially the same as the Waterfall model;
  8. someone becomes a “developer” during a one-year Masters conversion course – in particular sociology and arts students;
  9. a “developer” tells you they want to be a manager;
  10. people don’t think they’re a geek.

There you go – if you don’t agree with these then you ain’t a geek! However, you are at least one tenth of one! 😀

developing software nobody wants

Developing software that nobody wants is a classic mistake. Joel Spolsky once said that if he had listened to what his customers wanted, and created a product with only their suggestions, they would not have come up with as many nice features (I’m paraphrasing from an episode of the stackoverflow podcast). However, maybe his logic is slightly skewed. Why? Well, he is developing a product for software developers (FogBugz). Therefore, his target market is essentially the same people who are writing the software. This makes developers a great source of feature suggestions.

Now consider, on the other hand, the average software developer. They are not creating a product that is used by other software developers. Instead, their customers can range from a highly technical audience (say, Matlab users) to a secretary with little computing knowledge. Clearly, if we are creating a product aimed at the latter market, a software developer is the last person we should be asking for feature advice.

Unfortunately, misaligned product requirements actually happen in the wild. I have been involved in a project that forced a user into making around 10 clicks to achieve a particular goal. Each step was valid, and allowed the user fine-grained control over a feature. However, the user didn’t need this level of control, and they simply selected the same options every time. There was great difficulty convincing most of the developers that this kind of fine-grained control was not required. Thus: developing software without thinking about how a customer is going to use it should be high on the list of UNFORGIVABLE sins.

Some problems, like the one described above, are hard for us programmers to grasp. As programmers we are taught to generalise, and I think we find it difficult to turn this off in our heads. This can be seen in the many applications that drown us in XML configuration. Product managers and CEOs have less room for excuses.

So how do we ensure that we are not developing software that nobody wants? It just seems obvious to me that you have to ask the people who will be buying your software. This seems so simple, right? Can you even believe that it doesn’t happen? Is it beyond the realms of possibility that somebody, somewhere, is developing a piece of software for a market without asking that market which features they want? You’d better believe it. It happens all the time.

is it ok to not ask questions?

First, excuse the inherent contradiction in the title! Now, with the recent rise of stackoverflow.com as the programmer’s favourite dynamic and responsive encyclopedia, I find myself questioning my own behaviour.

Why? Well, I find that I almost NEVER ask my peers any problem-based programming questions.

Don’t misunderstand me here – the amount of stuff I don’t know is literally unbounded – it’s just that I will always work away at a problem until I have found the solution myself. I’m pretty sure that I didn’t always do this: I can vividly remember asking a friend I worked with years ago many, many programming-related questions. Thinking back to those times, though, it almost felt like I was cheating by asking so many questions.

However, in the last 5 years, much of the development work I have undertaken has been as a contractor. In this setting I literally had no one to ask. So what did I do? Well, I just spent the time researching the problem, figuring out the details, and then tried to use what I had learned to solve it – with the problems encountered ranging from the trivial to the utterly complex.

This habit of not asking questions has stayed with me. Even on a recent project I encountered several issues that literally had me stumped. Yet with a lot of effort and hard work, I got something that worked in the end. This, maybe wrongly, leads me to believe that those who ask a lot of questions are just not willing to put this kind of effort in – I suppose it’s that, or they are just not capable! Or maybe they just prefer to economise their effort: it simply seems much easier to ask, and so that’s what they do. But that really just boils down to laziness. Right?

What can we tell from those who ask a lot of questions and those who ask none?

Despite my previous remarks, I think it’s easy to see why asking questions is a good thing: it saves time, builds communities, and most importantly it allows you to benefit from other people’s experience. That said, I feel that a developer who has the ability to think freely and “discover” a solution is extremely valuable – you may not always be in a position to find an answer to your burning question, especially if it is tightly tied to the business logic of your application. I’m sure many of those people who ask the most trivial questions would benefit from the process of finding their own answer, rather than being handed it on a plate at the likes of stackoverflow. Is the success of stackoverflow the first sign of the dumbing down of programmers, or is it the start of a collective knowledge transfer that can only benefit our whole field?

The link between free thinkers and those who do not ask questions is not an if-and-only-if relationship – anyone with any experience of tutoring students will know this. Many people who do not ask questions actually don’t have a clue (and maybe never will). You see this all the time with certain students: they never say they have a problem and they don’t ask questions. Then you see their exam results and think WTF.

This leaves me with a problem though: I don’t want to say that all free thinkers don’t ask questions, and I know that those who do not ask questions are not all free thinkers. So how do you figure out which is which? Mmmm, maybe I will just leave that as a question 😀

shattering illusions – is google losing its googly culture?

I have forever viewed Google as the ubiquitous dream maker for the software engineer. You work on great products (Google Search, Gmail, Google Reader, Google Apps, Google App Engine, the list goes on), and they appear to treat developers well (20% time, free lunches and drinks, gym membership, and more). However, it seems that not everyone is happy, if a recent article on TechCrunch (entitled Why Google Employees Quit) is anything to go by.

It’s understandable that people who no longer work at Google may have reasons to slate their previous employer. However, these people did not go out of their way to highlight problems at Google; rather, Google actually asked them why they left. I think we can safely say that many of the points raised in this collection of responses are valid issues.

By far the most common complaint is with regard to the recruitment process.  It apparently takes forever. It obviously never stopped these people from joining, but left a lasting impression.  However, if they could get over this hang-up with the recruitment process, what really is the problem?

First, as much as people like to pretend that it doesn’t matter, money plays its part. This seems especially relevant to the ex-Microsoft employees. These employees seemed to justify the pay cut they took when moving from Microsoft to Google as being “worth it” to work in a “Googly” culture – where Googly tends to translate into FUN. It seems strange that Google does not have similar pay scales to Microsoft, as they are direct competitors on this front. This may become more of an issue for Google if their Googly culture appears to be in recession.

Another point that popped up on more than one occasion was management. This is always an easy shot, let’s face it, but it was reiterated by enough people to warrant a look. My initial impression is that there exists the typical competitive race for promotions (as in looking good, not necessarily doing good); this always leaves certain people unhappy, myself included, and it always brings out the worst in people. I’m genuinely surprised and sad that this sort of behaviour has seeped its way into Google, and it seems the inevitable outcome of a standard management hierarchy in a large corporation. Will we never learn anything from Gore’s “self-management and the flattened hierarchy”? It seems not.

So, is it too soon to say that Google may be losing its highly valued culture? Such a shift can surely only play into the hands of Microsoft: with no difference in culture, and higher salaries, it seems like a no-brainer for the top candidates.

My own personal opinion of Google (as an employer) has diminished somewhat after reading the aforementioned article – whose contents must present a PR nightmare for Google. I have maintained (indirectly) for quite some time now that people will only accept smaller salaries if the environment is FUN to work in. Fun seems harder to maintain, though, as a company grows and becomes filled with people with glittering ambitions for their own careers. Unfortunately, the ethos of working together to obtain mutual reward seems somewhat out of place in the new millennium. I suspect even this recession, or a prolonged depression, will not stifle the greed of those who are selfish and do not care.

the future of the humble programmer

Back in ancient times, and right up until relatively recently (the mid-1400s), scribes were used to copy important documents for kings, queens and other nobility. It’s hard to imagine that most people couldn’t write back then, and I suspect (but can’t find any hard facts) that many, many people still had difficulty writing until the beginning of the 1900s.

However, the job of a scribe became almost redundant overnight with the invention of moveable type. At this point we began to see a power shift, as documents were easily copied, translated and distributed to the masses. Obviously there was still the hurdle of learning to read, but now many important documents were available to more than just a handful of people – prior to this, reading such documents was limited to the nobility and the clergy.

Moveable type remained the most popular way of distributing documents and information right up to the modern day. However, with the advent of computers and the internet, this has changed for good.

Today we find fewer and fewer people reading books; instead we overdose ourselves on blogs and social networking sites. This information exchange only serves to benefit each and every one of us, as now we can read opinions that are not dependent on the views of an editor with whom we share little to nothing in common.

It’s easy to see how jobs have transformed over the centuries: once-coveted jobs are now in the hands of “amateurs”, who to their credit provide content that is more pertinent to the interested party. So where does this leave us as software developers? How will our roles stand up in the future?

Think back to what I said earlier: not too long ago most people couldn’t even write, and the computer was out of reach, both financially and physically, of most people – yet now both these things are the norm in society. So just as everyone learned to write, is everyone going to learn to program?

OK, you may be thinking that a field like mathematics has been around a long, long time, and that not everyone is competent in even basic mathematics. However, let’s face facts: a general programming task is nowhere near as difficult as even high-school-level mathematics. That’s not to say that difficult computing tasks don’t exist; in fact I’m hoping to convince you of the opposite.

Is it that crazy to think that one day people will program a computer in the same vein as we read and write English (insert your native tongue here)? I don’t think so. Programming is not really that difficult (doing it well is as difficult as writing beautifully in English, and that has never stopped people writing – look at me). Just as blogs and the internet pulled down the barrier for each and every one of us to write and be heard, I feel it’s only a matter of time before programming computers becomes something closer to what many people do in everyday life.

If we look closely, the first steps are already underway. People are using HTML, CSS and JavaScript as if they were everyday things. They may be doing so under the veil of certain tools, but the “non-programmers” are programming. They may not even know they are doing it, and building more complex applications is only going to get easier.

For example, consider writing a database application for an online bicycle shop using CakePHP (something I have experience of, hence the inclusion). You have to know almost nothing about using a database to create this application. OK, you may say that scaling and optimising these things takes a “professional”, but at the rate we are pushing the technology, this barrier may not be there in 5-10 years’ time – consider cloud computing environments as a step in this direction.

What I’m not saying here is that all computing is easy. There are still problems that are difficult to solve and require much, much more than simply following a fixed set of instructions. Indeed, this is the domain on which we developers must start focusing.

There may be many readers out there who think this is all nonsense and that programming computers is always going to be an elite occupation. Just tell that to the scribes, journalists and media presenters/organisations whose occupations have either vanished or are suffering severe contraction. Many of these occupations never saw it coming (or refused to see it coming) and did nothing – just remember how quickly this actually happened to each group. Do you want to be in the same position?

some thoughts on pair programming

In the last couple of days I have had a chance to do a little bit of pair programming. In the past I have found myself being a touch sceptical of it all, but predictably I hadn’t actually done much of it. So here are some thoughts from my recent experience.

Firstly, I must admit it was only half pair programming, in the sense that I was only an observer and not a driver.

I would say that my biggest concern with pair programming was that I found it hard to believe there would not be a drop in overall productivity with two people concentrating on the same task. However, I’m beginning to think that, counter-intuitively, this may not be the case – though it also depends on the task.

For example, take the case when two people have completely different skill sets and both are required for the job. In my case, I knew literally nothing about the programming language we were developing in. However, I had a fair amount of knowledge of the “API” we were coding to (fair amount meaning I wrote it!). As a result I knew what we were trying to achieve, just not how to achieve it. In this situation the pair programming seemed to go very well, and I would suspect that the functionality was added to the code twice as fast as it would have been if either of us had done it alone.

So what are the other benefits? You almost always learn something when pairing up – even if you have more experience than the other person. For me, I learned that you could place a hash (#) in front of a number to specify the ASCII value for a character – for example, #9 is a tab character. This had passed me by for some reason; I mean, I knew in HTML you could do stuff like &#33 to obtain an !, but I had just never thought about what it meant. Now I understand! You also get the benefit of seeing how someone else thinks, and you can learn a lot from that. You might also pick up “tips” for navigating the environment or valuable tools being used.
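
Just to make the connection concrete, here is the same idea sketched in Python (purely illustrative – the language we were pairing in isn’t the point): a character literal like #9, or an HTML reference like &#33, is just a numeric code point.

```python
import html

# Code point 9 is the tab character, which is what a #9 literal denotes.
print(chr(9) == "\t")          # True

# Code point 33 is "!", which is what &#33; denotes in HTML.
print(chr(33))                 # !
print(html.unescape("&#33;"))  # ! (decoding the HTML numeric reference)
```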

There are some downsides too. While doing it over the last couple of days I never noticed too many, but I can certainly imagine things like different personalities, strong opinions, and reluctance to participate affecting things quite drastically. Here are a few that I think may annoy the driver, but as I wasn’t driving I don’t really know for sure.

First, as a navigator you sometimes notice tiny things like misspelt names, missed semicolons, etc., and at the start I pointed these out pretty quickly. However, I then thought to myself: maybe the person has noticed these things and was going to go back and change them when they finished typing the line. So I just tried to stop myself interrupting too much – as I suspect someone who constantly points things out can get pretty annoying. I also had to stop myself preaching those of my coding habits that make no difference to functionality; that was harder, though, as I pretty much have an opinion on just about everything 🙂 . As for navigating, I’m sure someone who completely ignored your suggestions and coded away regardless would be extremely annoying.

I’m not sure that pair programming is THE definitive development strategy. I’m certain that someone sitting over you while you are writing some form of input parser, where you are just going through the motions, is almost pointless. Also, when you have to think really hard about something, whether it be debugging a hard problem or some complex algorithm, someone interjecting all the time would be extremely off-putting.

So to sum up: pair programming definitely has advantages, and with some careful planning it can be used to improve a project’s quality, cost and time to launch.

does not being aware of np-completeness make you dumb?

I have recently been reading a series of posts ([1][2][3]) from various authors about the pros and cons of having or not having a degree in CS. I’m sure the argument goes both ways, but my thoughts are focused on how someone who has not spent 4 years obtaining a degree accrues the knowledge they may need.

For the purpose of this post I will use the concept of NP-completeness (or in fact algorithmics in general) as an example. Experience has taught me that NP-completeness is a topic that quite a few developers are not familiar with, despite its importance in the field. Many may be aware of it through Jeff Atwood’s post over at codinghorror, where he suffered at the hands of the topic’s rigour.

So how does someone who has never obtained a CS degree find out about such a topic? My feeling is that they don’t. OK, some people will go out of their way to find such topics, but I haven’t met too many who do. Some may even point out that NP-completeness did (or does) not appear on their course curriculum, in which case it probably boils down to Dan Dyer’s point about the virtues of choosing a good university course. What I’m trying to say is that someone who goes to a decent university and obtains a good degree has spent (possibly) 4 years learning about the foundations of computing, giving them a good base from which to make important technical decisions.

For example: you are hit with a problem at work, let’s say some sort of job scheduling problem, or the allocation of people to resources. The problem seems easy when described, the agile developer in you takes over, and you start churning out code. However, had you taken even a simple algorithms course at university, you might have thought twice about diving into such a problem – as many variants of these problems are known to be NP-hard.
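
To make that concrete, here is a toy illustration of my own (not taken from any real system): even the simplest “split the work evenly between two machines” question is the classic PARTITION problem, which is NP-complete, and the obvious brute-force solution blows up exponentially.

```python
from itertools import combinations

def can_split_evenly(durations):
    """Brute-force check of the NP-complete PARTITION problem: can these job
    durations be split into two halves with equal total time? Every subset is
    tried, so the work roughly doubles with each additional job."""
    total = sum(durations)
    if total % 2:
        return False
    target = total // 2
    for r in range(len(durations) + 1):
        if any(sum(subset) == target for subset in combinations(durations, r)):
            return True
    return False

print(can_split_evenly([3, 1, 1, 2, 2, 1]))  # True: e.g. {3, 2} vs {1, 1, 2, 1}
# Fine for a handful of jobs, but with 40 jobs this examines up to ~2^40 subsets.
```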

This brings me back round to my question: where do such developers learn about these things? They may read articles on sites like DZone or reddit, but c’mon, let’s face it, these sites are pretty good at giving you the latest articles about Java, .NET or jQuery, but it’s rare to see an article about algorithmic issues rise to the top of the popularity list. I mean, who wants to read the details of what a B+ tree, a trie or the Boyer-Moore string matching algorithm is? It’s hard going, and most will instead take the easy option of reading guff like 7 ways to write beautiful code. However, if you attend university you are forced to learn many of the finer details of CS, because you are examined on them. OK, you may not need them for a simple web app, but an employer is certainly going to be happier that you have the details in your head should you need them. The point is that you never know when you are going to need such things, and knowing when you do is maybe what you spend 4 years gaining.

naïve algorithms will let you down

Over the last few years I have been in a position where I have been party to discussions about software with people who know little to nothing about the art of computer science. Many of these people were in charge of very large governmental budgets for some fairly high-profile, public-facing software systems. These systems typically involved a large amount of deep algorithmic work which, on the face of it, might seem simple, but was actually very complicated. Given this situation you would have thought people would listen when advice was being given out. Oh, how wrong you would be!

For some context I will introduce a problem which is not strictly the situation described above (to protect the guilty), but which has a similar flavour.

The problem I’m going to describe is that of allocating a set of school children to their (or, I suppose, their parents’) most desirable school. Each parent is asked to rank 5 schools in strict order of preference. A centralised system is then used to assign each child to a school.

The naïve (serial-dictatorship) solution to this problem is simply to take each child in turn and assign them to the first school on their list that is not already full. So what is the problem with such a solution? Well, firstly, we can end up with an assignment of children to schools that does not necessarily have maximum cardinality, i.e. there exists another allocation in which a larger number of children are matched to schools. Also, we may end up with an allocation in which a child c1 ends up at his fifth-placed school while another child c2 obtains his first choice, but if c2 moved to his second choice then c1 would be able to get his third choice. If we allowed this switch to take place we would clearly end up with a “better” allocation.
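
Here is a minimal sketch of that naïve approach (my own illustrative code, with made-up child and school names), which also reproduces the failure mode just described:

```python
def serial_dictatorship(preferences, capacities):
    """Naive allocation: take each child in turn and give them the first
    school on their preference list that still has a free place."""
    places_left = dict(capacities)
    allocation = {}
    for child, ranked_schools in preferences.items():
        for school in ranked_schools:
            if places_left.get(school, 0) > 0:
                allocation[child] = school
                places_left[school] -= 1
                break  # child matched; otherwise they simply go unallocated
    return allocation

# c2 is processed first and grabs its first choice C, so c1 slides all the
# way down to its fifth choice E. Had c2 taken its second choice F instead,
# c1 could have had its third choice C.
prefs = {"c2": ["C", "F"], "c1": ["A", "B", "C", "D", "E"]}
caps = {"A": 0, "B": 0, "C": 1, "D": 0, "E": 1, "F": 1}
print(serial_dictatorship(prefs, caps))  # {'c2': 'C', 'c1': 'E'}
```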

Despite the poor characteristics of the naïve algorithm, I suspect that many, many organisations, both governmental and private, use exactly such an algorithm when assigning people to resources where preference information is available. The problem is that the people in charge of building such systems often only see the need to allocate people to resources in whatever way possible and in the shortest time. So what happens? Well, they get a system that looks flashy and kind of works. However, a couple of months in, when they have a group of people complaining and operating outside the system, the reality finally dawns on them. I’m sure in the UK alone we could count quite a few computer systems that have fallen under the weight of this pressure.

So we have shown that the naïve algorithm does not always work. Can we safely assume that the heart of a software product lies in the complicated algorithms at its core? I think we can say so for Google, but who knows in general. In the example I described I think this is definitely the case. So what would be the “best” way to allocate these children to schools?

There are different meanings of “best”, but all would ensure that the allocation has maximum cardinality – without this we are at a lost cause straight away. Subject to this, we could attempt to maximise the number of children assigned to their first-choice school, then, subject to that, maximise the number assigned to their second-choice school, and so on. Then, if we receive an angry parent complaining that their child got their third-choice school, we can at least tell them that there was no other possible assignment in which some child with their first or second choice does not become worse off. Hence there is no incentive for the administrator to allow any changes to take place. This is known as a greedy maximum matching (allocation). Lastly, we need to make sure that these algorithms are efficient – most algorithms are no use to anybody if they are exponential.
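
One way to compute such an allocation is to cast it as a min-cost max-flow problem. The sketch below is my own illustration (using networkx, not any production system): a source feeds each child, each child connects to the schools on their list, and each school drains into a sink with its capacity. The rank-based edge costs are an assumption of this sketch, chosen so that, among maximum-cardinality allocations, filling a better rank always beats any number of changes at worse ranks.

```python
import networkx as nx

def greedy_maximum_allocation(preferences, capacities):
    """Greedy maximum matching via min-cost max-flow: maximum cardinality
    first, then as many first choices as possible, then second choices, ..."""
    n = len(preferences)
    R = max(len(ranked) for ranked in preferences.values())
    G = nx.DiGraph()
    for child, ranked_schools in preferences.items():
        G.add_edge("source", child, capacity=1, weight=0)
        for rank, school in enumerate(ranked_schools, start=1):
            # Cost of a rank-r edge: (n+1)^(R-1) - (n+1)^(R-r). Rank 1 is free,
            # and the gap between ranks is so large that one extra child at a
            # given rank outweighs every possible change at worse ranks.
            cost = (n + 1) ** (R - 1) - (n + 1) ** (R - rank)
            G.add_edge(child, school, capacity=1, weight=cost)
    for school, cap in capacities.items():
        G.add_edge(school, "sink", capacity=cap, weight=0)
    flow = nx.max_flow_min_cost(G, "source", "sink")  # max flow, then min cost
    return {child: school
            for child, ranked in preferences.items()
            for school in ranked
            if flow[child].get(school, 0) == 1}

# The naive approach above could leave one of these children unmatched
# (c1 would grab X); the flow formulation matches both.
prefs = {"c1": ["X", "Y"], "c2": ["X"]}
caps = {"X": 1, "Y": 1}
print(greedy_maximum_allocation(prefs, caps))  # {'c1': 'Y', 'c2': 'X'}
```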

So there we go – just remember that naïve algorithms are not always (and often are not) the best.

some top tips to piss off your colleagues

Every day you stumble into work and go about your daily routine. You’re reasonably sure that you’re a model work colleague and everyone likes you. But this is pretty boring, right? So from this day forward you decide to make other people’s lives a misery. Here are a few handy tips guaranteed to help you with this.

(A short note: in case confusion ensues, the tone of this article is meant to be somewhat tongue-in-cheek! And please let me know of any annoyances that I may have missed in the comments 🙂 )

  1. Constantly receive incoming phone calls: to be honest it doesn’t even have to be constant, just enough to make it appear constant to those sitting around you. I have no real problem with people talking around me, or even playing music out loud (as long as it’s not all the time), but the torture of constantly hearing a one-sided conversation is exasperating. For maximum effect, I suggest you have a really bad ringtone, and make outgoing calls with regular frequency as well.
  2. Take that chicken curry back to your desk so everyone can smell it at lunchtime: I can’t decide whether this is the thing that annoys me most. There is nothing worse than sitting down to do some work when some dude slaps his lunch down on the table 2 feet away. OK, eating certain things at your desk is fine, but there are others that should just be outright banned, and it doesn’t take a genius to figure out if you are eating one of those things. Or maybe it does! My suggestion for obtaining the maximum effect with this one is to leave it sitting on your desk for a bit to “cool down” prior to eating it.
  3. Constantly ask people the same questions (or make the same mistakes): I know this one affects more than just me – I will elaborate. Imagine sitting next to someone who constantly asks you the same questions about how to do something. OK, at first they were new and you answered with glee, but it’s been two years since this person started the job and they still ask you how to do the most simple tasks (like how to build the software). Not only that, but when you tell them they always experience a problem, the same problem, and ask you why it’s not working. This is guaranteed to have people muttering under their breath as soon as their name leaves your mouth. Note: this works best if you don’t say their name too loudly at first and just lightly let it drift off your lips. This will give the “pink panther effect”, where they gradually think they are going insane. Those who cannot learn from history are doomed to repeat it.
  4. Find out the names of new buzz technologies, then constantly suggest including them in your app, especially if they are not relevant: remember, you don’t need to know what the new technology does or its advantages/disadvantages – in fact it’s better that you don’t understand what it is, as this heightens the effect. For example, say you want to use AJAX in your legacy desktop COBOL application. To make this even more powerful, don’t listen to people when they try to tell you that it’s not relevant – just state “it is” and then bring it up again in the next meeting.
  5. If you’re a manager, have favourites: this is just soooo cool to do if you are a manager. There is literally no better way to piss off those people who are not on your favourites list. If you want people to really talk about you, this is the one to go for. For added value, place all those you don’t really like in a maintenance team, fixing all those pesky bugs that your favourites create.
  6. If you’re not a manager, and your manager has a favourites list, try to be on it: everyone hates a brown-nose, and this will easily get you straight to the number one spot in your colleagues’ diatribes without a shadow of a doubt. For this to be especially effective you have to be on that list because you know nothing, but are not going to let that get in the way of your career. Once you are on it, though, try to do things that you know will annoy other people but that your boss will let you away with because you are his favourite, e.g. showing up late for stuff.
  7. Keep meetings running longer than they should: if you think a meeting is nearing its conclusion, try to extend it for at least another 30 minutes – you can do so by going over the same points that were discussed in the previous hour of the meeting.
  8. If you’re a manager, send programmers on courses they will really, really hate: one course that will probably do the trick is communication skills. Ensure that the course has lots of role playing, and drops the average programmer waaaaaay out of their comfort zone. To really rub it in, make them write a report about it when they get back. Then, when they give you the report, make it obvious that you are never going to read it anyway.
  9. Make people feel completely uncomfortable if they have to ask you a question: the key to this is to make them feel STUPID. Some possible tips: sigh loudly when they get to your desk; don’t make eye contact when they speak to you, just keep typing and looking at your screen while talking to them; speak to them as condescendingly as possible – phrases such as “it’s obvious!” and “everyone knows that!” normally hit the mark; and finally, show disdain toward any ideas they come up with. Also, never finish the conversation by confirming that they have actually understood what you have just said; just sit in silence and see how long it takes for them to realise the conversation is over. Essentially you are trying to be as unapproachable as possible.
  10. Occupy other people’s space: just start to gradually leave pieces of paper and books on the desk of the person next to you. Don’t limit it to objects, though – try to put yourself in their space as well. Leaving empty, mouldy coffee cups and half-eaten food will likely yield a reaction; just don’t let this stop you.